Sanmay has worked with the US Treasury Department on machine learning approaches to credit risk analysis, and occasionally consults in the areas of technology and finance.
“Avoid incrementalism and the desire to measure the value of your research by competitive benchmarks and metrics.”
Sanmay Das
For this week’s ML practitioners series, Analytics India Magazine (AIM) got in touch with Sanmay Das, whose research interests lie in designing effective algorithms for agents in complex, uncertain environments and in understanding the social or collective outcomes of individual behaviour. Sanmay is a computer science professor who serves as chair of the ACM Special Interest Group on Artificial Intelligence and as a member of the board of directors of the International Foundation for Autonomous Agents and Multiagent Systems.
AIM: What got you interested in AI/ML?
Sanmay: After completing high school in India, I came to the US to attend Harvard University, intending to study Physics but also excited by the opportunity to take courses in many different subjects. I found myself far more engrossed in my Computer Science classes, and by the end of my first year of college I had decided that would be my field of study. In my second and third years, coursework and research with Profs. Charles Elkan (who was visiting from UC San Diego at the time) and Barbara Grosz really drew me into AI and machine learning. I moved a couple of miles down the river to MIT for graduate school, where I was privileged to work with Profs. Tomaso Poggio and Andrew Lo for my PhD. I delved into research at the intersection of computer science, finance, and economics, which was still a somewhat unusual combination twenty years ago but has continued to form the core of my research interests since then.
Initially, I was really taken with the idea of understanding intelligence by generating intelligent behaviour (rather than studying systems that already exhibit it). I started working on systems in which multiple agents interact, and even though I am now considerably less optimistic about generating truly intelligent behaviour in my lifetime, I still find the question of how to design agents that can achieve (perhaps more limited) goals in complex, multi-agent environments absolutely fascinating.