Fractal learning
Overview
FractalLearner is an experimental learning system that processes and analyzes information through recursive, fractal-inspired methods. It supports multiple learning modes, multimedia input (text, images, audio), persistent knowledge storage, and semantic search. This proof of concept demonstrates the core functionality of adaptive, multimodal learning.
Features
Recursive Learning: Processes information in layered, recursive depths with configurable max depth.
Multiple Learning Modes: Supports comparative, critical, creative, and standard analytical learning.
Multimedia Learning: Handles text, image (via CLIP), and audio (via speech recognition) inputs.
Knowledge Persistence: Stores knowledge base and vector database for persistent learning.
Semantic Search: Uses FAISS vector database for similarity-based knowledge retrieval.
Interactive Sessions: Facilitates guided learning with adaptive next-step suggestions.
Integration: Supports integration with external AI systems for knowledge sharing.
Project Structure
fractal_learner/
├── fractal_learner.py
└── my_knowledge_base/
    ├── knowledge_base.json
    ├── vector_db.index
    └── learner_state_*.pkl
Prerequisites
Python: Version 3.8+ recommended.
Dependencies: Install required packages:
pip install torch transformers faiss-cpu numpy pillow soundfile speechrecognition
Directory Setup: Create storage directory:
mkdir -p my_knowledge_base
Installation
Clone or create the project directory:
mkdir fractal_learner
cd fractal_learner
Save the provided fractal_learner.py in the project directory.
Install dependencies:
pip install torch transformers faiss-cpu numpy pillow soundfile speechrecognition
Create the knowledge base directory:
mkdir -p my_knowledge_base
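Optionally, run a quick sanity check to confirm the dependencies are importable before using the script (a minimal check, not part of the provided code):
python -c "import torch, transformers, faiss, numpy, PIL, soundfile, speech_recognition; print('Dependencies OK')"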
Usage
Import FractalLearner in a Python script or interactive session to start learning or interacting with the system, as in the examples below.
Example Commands
Interactive Learning:
from fractal_learner import FractalLearner
learner = FractalLearner(max_depth=6, storage_path="./my_knowledge_base")
session_id = learner.start_interactive_session("science_tutorial")
response = learner.interactive_learn(
    session_id,
    "Explain quantum entanglement in simple terms",
    learning_mode="comparative"
)
print("Learning insights:")
for insight in response["insights"]:
    print(f"Depth {insight['depth']}: {insight['insight']}")
print("\nSuggested next steps:")
for step in response["next_steps"]:
    print(f"- {step}")
Multimedia Learning:
learner.learn_from_image("physics_diagram.jpg", "Quantum mechanics illustration")
learner.learn_from_audio("lecture.wav")
Semantic Search:
results = learner.semantic_search("quantum physics", k=3)
print(f"Semantic search results: {len(results)} found")
for result in results:
    print(f"- {result['text']} (Similarity: {result['similarity']:.2f})")
Save State:
learner.save_state()
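To verify persistence across runs, a round trip could look like the sketch below. It assumes that FractalLearner reloads knowledge_base.json and vector_db.index from storage_path when it is constructed; check fractal_learner.py for the exact reload behavior.
# In a later run, point a fresh instance at the same storage directory.
restored = FractalLearner(max_depth=6, storage_path="./my_knowledge_base")
results = restored.semantic_search("quantum physics", k=3)
print(f"Recovered {len(results)} stored entries after reload")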
Testing Key Features
Recursive Learning: Observe layered insights generated for input text in different modes (see the sketch after this list).
Multimedia Processing: Test image and audio inputs to see extracted knowledge.
Semantic Search: Query the knowledge base to retrieve relevant insights.
Interactive Sessions: Start a session and provide inputs to see adaptive learning paths.
Persistence: Save and load the learner state to verify knowledge retention.
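For example, the learning modes can be compared on the same prompt with a loop like the one below. Only "comparative" appears in the usage example above; the other mode names are assumptions based on the feature list, so adjust them to match learning_modes in fractal_learner.py.
session_id = learner.start_interactive_session("mode_comparison")
# Mode names other than "comparative" are assumed; see learning_modes for the real list.
for mode in ["analytical", "comparative", "critical", "creative"]:
    response = learner.interactive_learn(
        session_id,
        "Explain quantum entanglement in simple terms",
        learning_mode=mode
    )
    print(f"\n=== {mode} ===")
    for insight in response["insights"]:
        print(f"Depth {insight['depth']}: {insight['insight']}")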
Customization
Modify max_depth in FractalLearner initialization to adjust learning recursion.
Add new learning modes by extending learning_modes in fractal_learner.py.
Adjust embedding generation in _get_embedding for more sophisticated vectorization (e.g., use sentence transformers; see the sketch after this list).
Customize storage paths or file formats in my_knowledge_base.
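As an illustration of the embedding point above, a sentence-transformers override of _get_embedding might look like the sketch below. It assumes the sentence-transformers package is installed, that _get_embedding takes a text string and returns a 1-D numpy vector, and that the FAISS index is rebuilt to match the new embedding dimension; the actual signature in fractal_learner.py may differ.
import numpy as np
from sentence_transformers import SentenceTransformer

from fractal_learner import FractalLearner

class SentenceTransformerLearner(FractalLearner):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Small, general-purpose sentence embedding model (384 dimensions).
        self._st_model = SentenceTransformer("all-MiniLM-L6-v2")

    def _get_embedding(self, text):
        # encode() returns a numpy array for a single string by default.
        # Note: vector_db.index must be rebuilt if the embedding dimension changes.
        return self._st_model.encode(text).astype(np.float32)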
Dependencies
torch
transformers
faiss-cpu
numpy
pillow
soundfile
speechrecognition