BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal iCal API//EN
X-WR-CALNAME:Events items teaser
X-WR-TIMEZONE:America/Toronto
BEGIN:VTIMEZONE
TZID:America/Toronto
X-LIC-LOCATION:America/Toronto
BEGIN:DAYLIGHT
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
DTSTART:20190310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
DTSTART:20191103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:6829f865c7d33
DTSTART;TZID=America/Toronto:20200130T100000
SEQUENCE:0
TRANSP:TRANSPARENT
DTEND;TZID=America/Toronto:20200130T100000
URL:/statistics-and-actuarial-science/events/department-seminar-hyukjun-jay-gweon-western-university
SUMMARY:Department seminar by Hyukjun (Jay) Gweon\, Western University
CLASS:PUBLIC
DESCRIPTION:Summary \n\nBATCH-MODE ACTIVE LEARNING FOR REGRESSION AND ITS APPLICATION TO THE\nVALUATION OF LARGE VARIABLE ANNUITY PORTFOLIOS\n\nSupervised learning algorithms require a sufficient amount of labeled\ndata to construct an accurate predictive model. In practice\,\ncollecting labeled data may be extremely time-consuming while\nunlabeled data can be easily accessed. In a situation where labeled\ndata are insufficient for a prediction model to perform well and the\nbudget for an additional data collection is limited\, it is important\nto effectively select objects to be labeled based on whether they\ncontribute to a great improvement in the model's performance. In this\ntalk\, I will focus on the idea of active learning that aims to train\nan accurate prediction model with minimum labeling cost. In\nparticular\, I will present batch-mode active learning for regression\nproblems. Based on random forest\, I will propose two effective random\nsampling algorithms that consider the prediction ambiguities and\ndiversities of unlabeled objects as measures of their informativeness.\nEmpirical results on an insurance data set demonstrate the\neffectiveness of the proposed approaches in valuing large variable\nannuity portfolios (which is a practical problem in the actuarial\nfield). Additionally\, comparisons with the existing framework that\nrelies on a sequential combination of unsupervised and supervised\nlearning algorithms are also investigated.\n
DTSTAMP:20250518T151029Z
END:VEVENT
END:VCALENDAR