multiple agents using python & local ai
Google just dropped Gemma 3! available in 1B, 4B, 12B, and 27B sizes
pip install ollama
#note: pip gives you the python client; the ollama CLI itself is installed from ollama.com
ollama run gemma3:12b
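Once the model is running, you can also talk to it from Python over Ollama's local REST API (it listens on port 11434 by default). A minimal sketch using only the standard library; the prompt text and helper name are my own:

```python
import json
import urllib.request

# Request for Ollama's /api/generate endpoint. The model tag must match
# whatever you pulled with `ollama run` -- gemma3:12b is assumed here.
payload = {
    "model": "gemma3:12b",
    "prompt": "Summarize what a multi-agent system is in one sentence.",
    "stream": False,  # return a single JSON object instead of a token stream
}

def ask_ollama(payload, url="http://localhost:11434/api/generate"):
    """Send a generate request to a locally running Ollama server."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    try:
        print(ask_ollama(payload))
    except OSError:
        print("ollama server not reachable -- start it with `ollama serve` first")
```

Swap `/api/generate` for `/api/chat` (with a `messages` list) if you want multi-turn conversations.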
here are some free ebooks to celebrate, then a bit of agentic CLI scripting
War is Math: How to Use Wargaming to Prevent & Win Wars
Financial Cryptography
Quizmaster Point Of Law: Civil Procedure
Negotiable Instruments
Lucifer's Legal Lexicon
CHINESE VOCABULARY
pip install crewai langchain-community ollama
ollama pull mistral-small:24b
#this model is fairly large and might choke your machine, so try a smaller one instead
ollama run mistral-small:24b
#or whatever llm you have locally installed
python crew.py
#here is a simple multi-agent text-writing crew
from crewai import Agent, Task, Crew, Process
from langchain_community.llms import Ollama  # for local models

# Initialize local LLM (using Ollama's Mistral)
local_llm = Ollama(model="mistral-small:24b")

# ===== AGENTS =====
researcher = Agent(
    role="Senior Researcher",
    goal="Discover groundbreaking insights",
    backstory="A curious mind obsessed with data patterns",
    verbose=True,
    llm=local_llm,
    # tools=[SearchTools.search_internet]  # uncomment once you define custom tools
)

writer = Agent(
    role="Technical Writer",
    goal="Create compelling content",
    backstory="Transforms complex ideas into clear narratives",
    verbose=True,
    llm=local_llm
)

reviewer = Agent(
    role="Quality Assurance Specialist",
    goal="Ensure factual accuracy",
    backstory="Detail-oriented fact checker",
    verbose=True,
    llm=local_llm
)

# ===== TASKS =====
research_task = Task(
    description="Investigate AI's impact on climate change mitigation",
    expected_output="Bullet-point report with key findings",
    agent=researcher
)

writing_task = Task(
    description="Create blog post using researcher's findings",
    expected_output="1500-word article with references",
    agent=writer
)

review_task = Task(
    description="Verify all claims and check sources",
    expected_output="Marked-up document with corrections",
    agent=reviewer
)

# ===== CREW =====
science_crew = Crew(
    agents=[researcher, writer, reviewer],
    tasks=[research_task, writing_task, review_task],
    verbose=True,
    process=Process.sequential  # or Process.hierarchical
)

# Execute workflow
result = science_crew.kickoff()
print("Final Output:\n", result)
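With the sequential process, the crew runs its tasks in order and each agent builds on the previous task's output. The control flow can be sketched in plain Python without crewai at all; the agent functions below are hypothetical stand-ins, not the real library:

```python
# Toy illustration of a sequential agent pipeline: each "agent" is just a
# function that takes the previous step's output as its input.
def researcher(topic):
    return f"- finding 1 about {topic}\n- finding 2 about {topic}"

def writer(findings):
    return f"Draft article based on:\n{findings}"

def reviewer(draft):
    return draft + "\n[reviewed: no corrections needed]"

def run_sequential(tasks, initial_input):
    """Run each task on the previous task's output, sequential-style."""
    output = initial_input
    for task in tasks:
        output = task(output)
    return output

result = run_sequential([researcher, writer, reviewer], "AI and climate change")
print(result)
```

The real library adds LLM calls, retries, and context management on top, but the data flow between agents is this simple chain.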






