Abstract
Autonomous multi-agent systems (MAS) have shown promise across many domains, but their effectiveness is often constrained by the agents' limited ability to formulate decisions from incomplete knowledge. A critical aspect of MAS is the ability to create believable proxies of human behavior, which can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication and prototyping tools. Previous research in this field has often focused on training agents with limited knowledge in isolated environments, an approach that diverges significantly from human learning processes and thus makes it difficult for agents to reach human-like decisions. The advent of Large Language Models (LLMs) offers a transformative solution, enhancing the communicative and cognitive capabilities of agents within MAS. By integrating LLMs, agents can better understand and generate human-like language, enabling more sophisticated interactions and decision-making processes. This thesis explores the integration of LLMs into MAS and investigates how this combination enhances the overall performance and capabilities of autonomous systems. The study builds on Stanford's "Generative Agents: Interactive Simulacra of Human Behavior" framework. By enabling this system to interact with various open-source LLMs, and by applying different prompt engineering techniques and post-processing methods, the thesis seeks to refine and enhance the responses generated by these models.