BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal iCal API//EN
X-WR-CALNAME:Events items teaser
X-WR-TIMEZONE:America/Toronto
BEGIN:VTIMEZONE
TZID:America/Toronto
X-LIC-LOCATION:America/Toronto
BEGIN:DAYLIGHT
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
DTSTART:20190310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
DTSTART:20191103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:68724c8dacf1a
DTSTART;TZID=America/Toronto:20200122T100000
SEQUENCE:0
TRANSP:TRANSPARENT
DTEND;TZID=America/Toronto:20200122T100000
URL:/statistics-and-actuarial-science/events/department-seminar-lin-liu-harvard-university
SUMMARY:Department seminar by Lin Liu\, Harvard University
CLASS:PUBLIC
DESCRIPTION:Summary \n\nTHE POSSIBILITY OF NEARLY ASSUMPTION-FREE INFERENCE IN CAUSAL\nINFERENCE\n\nIn causal effect estimation\, the state of the art is the so-called\ndouble machine learning (DML) estimators\, which combine the benefits of\ndoubly robust estimation\, sample splitting\, and the use of machine\nlearning methods to estimate nuisance parameters. The validity of the\nconfidence interval associated with a DML estimator relies\, in large\npart\, on the complexity of the nuisance parameters and on how close the\nmachine learning estimators are to those nuisance parameters. Until we\nhave a complete understanding of the theory of many machine learning\nmethods\, including deep neural networks\, even a DML estimator may have\na bias so large that it prohibits valid inference. In this talk\, we\ndescribe a nearly assumption-free procedure that can either criticize\nthe invalidity of the Wald confidence interval associated with the DML\nestimators of some causal effect of interest\, or falsify the\ncertificates (i.e. the mathematical conditions) that\, if true\, could\nensure valid inference. Essentially\, we are testing the null hypothesis\nthat the bias of an estimator is smaller than a fraction $\\rho$ of its\nstandard error. Our test is valid under the null without requiring any\ncomplexity (smoothness or sparsity) assumptions on the nuisance\nparameters or on the properties of the machine learning estimators\, and\nmay have the power to inform analysts that they need something other\nthan DML estimators or Wald confidence intervals for inference\npurposes. This talk is based on joint work with Rajarshi Mukherjee and\nJames M. Robins.\n
DTSTAMP:20250712T115245Z
END:VEVENT
END:VCALENDAR