BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal iCal API//EN
X-WR-CALNAME:Events items teaser
X-WR-TIMEZONE:America/Toronto
BEGIN:VTIMEZONE
TZID:America/Toronto
X-LIC-LOCATION:America/Toronto
BEGIN:DAYLIGHT
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
DTSTART:20200308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
DTSTART:20191103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:682b25eb269ef
DTSTART;TZID=America/Toronto:20200917T160000
SEQUENCE:0
TRANSP:TRANSPARENT
DTEND;TZID=America/Toronto:20200917T160000
URL:/statistics-and-actuarial-science/events/department-seminar-neil-spencer-carnegie-mellon-university
SUMMARY:Department seminar by Neil Spencer\, Carnegie Mellon University
CLASS:PUBLIC
DESCRIPTION:Summary \n\nA NEW FRAMEWORK FOR MODELING SPARSE NETWORKS THAT MAKES SENSE (AND CAN\nACTUALLY BE FIT!)\n\nLatent position models are a versatile tool when working with network\ndata. Applications include clustering entities\, network visualization\,\nand controlling for unobserved causal confounding. In traditional\ntreatments of the latent position model\, the nodes’ latent positions\nare viewed as independent and identically distributed random\nvariables. This assumption implies that the average node degree grows\nlinearly with the number of nodes in the network\, making it\ninappropriate when the network is sparse. In the first part of this\ntalk\, I will propose an alternative assumption—that the latent\npositions are generated according to a Poisson point process—and\nshow that it is compatible with various levels of network sparsity. I\nwill also provide theory establishing that the nodes’ latent\npositions can be consistently estimated\, provided that the network\nisn't too sparse.  In the second part of the talk\, I will consider\nthe computational challenge of fitting latent position models to large\ndatasets. I will describe a new Markov chain Monte Carlo\nstrategy—based on a combination of split Hamiltonian Monte Carlo and\nFirefly Monte Carlo—that is much more efficient than the standard\nMetropolis-within-Gibbs algorithm for inferring the latent positions.\nThroughout the talk\, I will use an advice-sharing network of\nelementary school teachers within a school district as a running\nexample.\n\nPlease note: This talk will be hosted on Webex. To join please click\non the following link: Department seminar by Neil Spencer\n[https://uwaterloo.webex.com/uwaterloo/onstage/g.php?MTID=e15ee553cf6b5ab678d359aa7d122a3ca].\n
DTSTAMP:20250519T123659Z
END:VEVENT
END:VCALENDAR