I wanted to share a project I’ve been working on that’s designed to test the problem-solving abilities of Large Language Models (LLMs), especially their ability to break a complex problem down into manageable steps.
About the Project: The tool is a manual maze solver that visualizes ASCII mazes. After every move, it re-renders the maze along with the solver’s absolute position. This creates a feedback loop you can use to engage models like ChatGPT and watch them reason through maze-solving in real time.
- Visualizes ASCII mazes of various sizes.
- Provides real-time feedback on the solver’s position after every move.
- Aims to facilitate interactions that test and evaluate LLMs’ ability to reason, break down problems, and navigate complex environments.
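To make the feedback loop concrete, here is a minimal sketch of the idea in Python. This is an illustration only, not the repository’s actual code: the maze layout, the `Maze` class, and the `move`/`render` names are all hypothetical, but the core loop — move, re-render with the absolute position marked, feed the result back to the model — matches the description above.

```python
# Hypothetical sketch: render an ASCII maze with the solver's position
# marked '@' and report absolute coordinates after each move.
# All names here are illustrative, not the project's real API.

MAZE = [
    "#########",
    "#S..#...#",
    "#.#.#.#.#",
    "#.#...#E#",
    "#########",
]

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

class Maze:
    def __init__(self, grid):
        self.grid = [list(row) for row in grid]
        # Start at the cell marked 'S'.
        self.pos = next(
            (r, c)
            for r, row in enumerate(grid)
            for c, ch in enumerate(row)
            if ch == "S"
        )

    def move(self, direction):
        """Attempt a move; walls ('#') block it. Returns the position."""
        dr, dc = MOVES[direction]
        r, c = self.pos[0] + dr, self.pos[1] + dc
        if self.grid[r][c] != "#":
            self.pos = (r, c)
        return self.pos

    def render(self):
        """Maze with '@' at the solver's current absolute position."""
        out = [row[:] for row in self.grid]
        r, c = self.pos
        out[r][c] = "@"
        lines = ["".join(row) for row in out]
        lines.append(f"position: row={r}, col={c}")
        return "\n".join(lines)

maze = Maze(MAZE)
maze.move("right")
print(maze.render())  # shows '@' one cell right of the start
```

The string returned by `render()` is what you would paste back to the model after each of its moves, so it always sees both the map and its exact coordinates.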
Repository Link: ASCII_LLM_Maze
I believe it can serve as an interesting testbed for those looking to push the boundaries of what LLMs can achieve. I’d love to get feedback, suggestions, or any insights you might have. Let’s explore the capabilities of LLMs together!