Multiagent learning
Learning and adaptation capabilities enable agents to respond to open, dynamic environments by exploiting new opportunities and avoiding unforeseen pitfalls. When multiple tightly coupled agents learn concurrently, each agent's environment changes as the other agents adapt, violating the stationarity assumptions underlying classical machine learning techniques. Multiagent learning is therefore a challenging and productive problem for both multiagent systems and machine learning researchers. We have worked on a variety of multiagent learning techniques, including multiagent reinforcement learning (JETAI'98, ICMAS'2000), multiagent case-based learning (ICMAS'96, IJHCS'98), Bayesian network based learning (ML'2000, AAI (to appear)), learning to predict behaviors (AGENTS'99), and others.
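The non-stationarity issue can be illustrated with a minimal sketch of two concurrent reinforcement learners: each agent runs an ordinary (stateless) Q-learning update while treating the other agent as part of its environment. The payoff matrix, learning parameters, and function names below are illustrative assumptions for exposition, not taken from the cited papers.

```python
import random

# Illustrative sketch (assumed payoffs/parameters, not from the cited work):
# two independent Q-learners repeatedly play a 2x2 common-payoff
# coordination game. Each agent sees only its own action and the shared
# reward, so from its viewpoint the "environment" (the other learner)
# keeps changing as that learner adapts.

PAYOFF = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): 0.0, (1, 0): 0.0}

def epsilon_greedy(q, epsilon):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q))
    return max(range(len(q)), key=lambda a: q[a])

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    random.seed(seed)
    q1, q2 = [0.0, 0.0], [0.0, 0.0]   # one Q-value per action, per agent
    for _ in range(episodes):
        a1 = epsilon_greedy(q1, epsilon)
        a2 = epsilon_greedy(q2, epsilon)
        r = PAYOFF[(a1, a2)]            # both agents receive the same payoff
        q1[a1] += alpha * (r - q1[a1])  # stateless Q-learning update
        q2[a2] += alpha * (r - q2[a2])
    return q1, q2

if __name__ == "__main__":
    q1, q2 = train()
    print("agent 1 Q-values:", q1)
    print("agent 2 Q-values:", q2)
```

In this common-payoff game the two learners typically lock onto one of the coordinated joint actions, but neither agent's learning problem is stationary along the way, which is the difficulty the project's concurrent-learning work (e.g., the ICMAS'2000 poster on evaluating concurrent reinforcement learners) examines.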
Publications From This Project
Osman Yucel, Chad Crawford, and Sandip Sen, "Evolving effective behaviours to interact with tag-based populations," Connection Science, Vol. 27, Issue 3, pages 288-304, 2015.
Jolie Olsen and Sandip Sen, "On the rationality of cycling in the Theory of Moves framework," Connection Science, Vol. 26, Issue 2, pages 141-160, April 2014.
Jolie Olsen and Sandip Sen, "Discovery, utilization and analysis of credible threats for 2 X 2 incomplete information games in the Theory of Moves framework," Connection Science, Vol. 26, Issue 2, pages 123-140, April 2014.
Bikramjit Banerjee, Sandip Debnath, and Sandip Sen, "Combining Multiple Perspectives," in the Proceedings of the International Conference on Machine Learning 2000 (pages 33-40), held June 29 to July 2, 2000, at Stanford University, CA.
Bikramjit Banerjee, Rajatish Mukherjee, and Sandip Sen, "Learning Mutual Trust," in the Working Notes of AGENTS-00 Workshop on Deception, Fraud and Trust in Agent Societies, pages 9-14, 2000.
Manisha Mundhe and Sandip Sen, "Evaluating concurrent reinforcement learners," in the Proceedings of the Fourth International Conference on Multiagent Systems (pages 421--422), IEEE Press, Los Alamitos, CA, 2000. (Poster paper)
A. Biswas and Sandip Sen, "Learning to model behaviors from boolean responses," in the Proceedings of the Third Annual Conference on Autonomous Agents (pages 396-397), ACM Press, April 1999.
Sandip Sen and Mahendra Sekaran, "Individual learning of coordination knowledge," Journal of Experimental & Theoretical Artificial Intelligence, Vol. 10, pages 333-356, 1998 (special issue on Learning in Distributed Artificial Intelligence Systems).
Thomas Haynes, Kit Lau, and Sandip Sen, "Learning Cases to Complement Rules for Conflict Resolution in Multiagent Systems," presented at the AAAI Spring Symposium on Adaptation, Coevolution, and Learning in Multiagent Systems, Stanford, CA, March 1996.
Sandip Sen, Mahendra Sekaran, and John Hale, "Learning to coordinate without sharing information," in the Proceedings of the National Conference on Artificial Intelligence (pages 426-431), Seattle, Washington, July 1994.