When building LLM-powered agents, I kept running into the same problem:
Most frameworks feel heavier than the actual workflow logic.
Graphs, nodes, planners, routers, memory managers…
All useful, but for many real projects, I just wanted to express:
- do step A
- then step B
- maybe loop
- maybe branch
So I tried a different direction:
what if an agent workflow were just normal async Python functions, composed together?
That idea became PicoFlow — a tiny, async-first DSL for AI agent workflows.
## Why I Didn’t Want Graph-Based Frameworks
Frameworks like LangChain and CrewAI are powerful, but they come with tradeoffs:
| Problem | What I experienced |
|---|---|
| Heavy abstractions | You think in framework concepts, not code |
| Debugging friction | Stack traces jump across internal layers |
| Overkill for small agents | Simple flows still require complex setup |
For small and medium-sized workflows, that overhead felt unnecessary.
I wanted something closer to:
- normal async functions
- explicit control flow
- minimal runtime magic
## Design Principle: Workflow = Function Composition
In PicoFlow, each step is just an async function:
```python
from picoflow import flow

@flow
async def step_a(ctx):
    return ctx.with_output("hello")

@flow
async def step_b(ctx):
    return ctx.with_output(ctx.output + " world")
```
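Here, `ctx` is a small context object that each step receives and returns. PicoFlow's real `Context` type may carry more fields, but a minimal sketch of the idea looks like this:

```python
from dataclasses import dataclass, replace

# A minimal sketch of the context idea; PicoFlow's actual Context
# class may differ. Steps never mutate ctx in place: with_input /
# with_output return a fresh copy, which keeps data flow explicit.
@dataclass(frozen=True)
class Context:
    input: str | None = None
    output: str | None = None
    done: bool = False

    def with_input(self, value: str) -> "Context":
        return replace(self, input=value)

    def with_output(self, value: str) -> "Context":
        return replace(self, output=value)
```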
And composition is just:
```python
pipeline = step_a >> step_b
```
No nodes. No graphs. No planners.
Just functions.
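The `>>` operator needs no graph machinery either. I won't claim this is PicoFlow's exact implementation, but a thin wrapper with `__rshift__` is all it takes in principle:

```python
# Illustrative re-implementation of >> composition; not necessarily
# PicoFlow's internals, just proof that no scheduler is required.
class Flow:
    def __init__(self, fn):
        self.fn = fn  # an async function: ctx -> ctx

    async def acall(self, ctx):
        return await self.fn(ctx)

    def __rshift__(self, other: "Flow") -> "Flow":
        async def composed(ctx):
            return await other.acall(await self.acall(ctx))
        return Flow(composed)
```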
## LLM Is Just Another Step
Calling an LLM is also just a flow:
```python
from picoflow import llm

LLM_URL = "llm+openai://api.openai.com/v1/chat/completions?model=gpt-4.1-mini&api_key_env=OPENAI_API_KEY"

agent = step_a >> llm(
    "Answer in one sentence: {output}",
    llm_adapter=LLM_URL,
)
```
Key ideas:
- Prompt is a template
- Context is explicit
- The LLM backend is configured via a URL-style adapter string
So switching providers does not affect your workflow code.
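Because the adapter string follows standard URL syntax, it can be parsed with the standard library. The helper below is hypothetical (PicoFlow's real parsing may differ), but it shows why the format is convenient:

```python
from urllib.parse import parse_qs, urlsplit

def parse_adapter_url(url: str) -> dict:
    # Hypothetical helper, not PicoFlow's public API.
    parts = urlsplit(url)  # handles the llm+openai:// scheme fine
    params = {k: v[0] for k, v in parse_qs(parts.query).items()}
    return {
        "provider": parts.scheme,                        # "llm+openai"
        "endpoint": f"https://{parts.netloc}{parts.path}",
        **params,                                        # model, api_key_env, ...
    }
```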
## Control Flow Without Framework Magic
Because everything is Python, you can express loops and conditions naturally.
Example: repeat until done:
```python
from picoflow import Flow

def repeat(step: Flow):
    async def run(ctx):
        # Keep re-running the wrapped step until the context reports done.
        while not ctx.done:
            ctx = await step.acall(ctx)
        return ctx
    return Flow(run)
```
Then:
```python
agent = repeat(thinking_step >> acting_step)
```
No custom DSL.
No hidden schedulers.
Just async code.
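Branching works the same way. Here is a small conditional helper in the same style as `repeat` (the predicate and step names are mine, not part of the library):

```python
def branch(predicate, if_true: Flow, if_false: Flow) -> Flow:
    # Route the context through one of two sub-flows.
    async def run(ctx):
        chosen = if_true if predicate(ctx) else if_false
        return await chosen.acall(ctx)
    return Flow(run)

# Hypothetical usage: only summarize long inputs.
# agent = branch(lambda ctx: len(ctx.input) > 200, summarize_step, passthrough_step)
```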
## LangChain vs PicoFlow: A Concrete Comparison
Let’s compare a very simple task:
Take user input, ask the LLM to summarize it, and return the result.
### LangChain (simplified)
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = ChatOpenAI()
prompt = PromptTemplate.from_template(
    "Summarize in one sentence: {text}"
)
chain = LLMChain(llm=llm, prompt=prompt)
result = chain.run(text=user_input)
```
Even this minimal version requires understanding:
- LLM wrappers
- prompt objects
- chain abstractions
When workflows grow, routers, memory, and tools quickly add more layers.
### PicoFlow
```python
from picoflow import flow, llm

@flow
async def input_step(ctx):
    return ctx.with_input(user_input)

agent = input_step >> llm("Summarize in one sentence: {input}", llm_adapter=LLM_URL)
result = agent.run().output
```
What’s different:
- no separate chain objects
- prompt inline where it is used
- workflow is explicit Python composition
Debugging is also straightforward because stack traces remain inside your own code.
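And if you want more visibility, the same wrapper pattern gives you tracing almost for free. A sketch (the log format is my own, not a PicoFlow feature):

```python
import logging

def traced(step: Flow, name: str) -> Flow:
    # Log the context's output before and after a step runs.
    async def run(ctx):
        logging.debug("-> %s: output=%r", name, getattr(ctx, "output", None))
        ctx = await step.acall(ctx)
        logging.debug("<- %s: output=%r", name, ctx.output)
        return ctx
    return Flow(run)
```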
## What PicoFlow Is (and Is Not)
It is:
- async-first
- minimal abstractions
- explicit data flow
- easy to debug
It is not:
- a full agent operating system
- a prompt management platform
- a graph orchestration engine
If you want large-scale multi-agent coordination, LangGraph may be a better fit.
If you want simple, readable, hackable workflows, PicoFlow is designed for that space.
## When This Approach Works Best
I’ve found PicoFlow useful for:
- CLI agents
- backend service pipelines
- tool-using agents
- local LLM workflows
- RAG prototypes
Basically: when you want to stay close to normal Python.
## Why I Open-Sourced It
This project started as personal tooling while experimenting with agent design.
But I kept rewriting the same patterns:
- flow composition
- retries
- loops
- tracing
So I turned it into a small library instead of another private utility module.
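As one example, the retry pattern I kept rewriting fits in a few lines of the same style (a sketch; the attempt count and blanket exception policy are my assumptions, not library defaults):

```python
def retry(step: Flow, attempts: int = 3) -> Flow:
    # Re-run a step until it succeeds or attempts are exhausted.
    async def run(ctx):
        last_error: Exception | None = None
        for _ in range(attempts):
            try:
                return await step.acall(ctx)
            except Exception as exc:  # naive policy: retry on any error
                last_error = exc
        raise last_error
    return Flow(run)
```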
The goal is not to replace big frameworks, but to offer a simpler option when you don’t need all that machinery.
## Try It
Repository:
https://github.com/the-picoflow/picoflow
It’s small, readable, and designed to be easy to modify.
Feedback and design discussions are very welcome — especially around:
- DSL ergonomics
- control-flow helpers
- tracing and debugging hooks
## Closing Thoughts
Agent frameworks are getting more powerful, but also more complex.
I think there’s still room for tools that prioritize:
- readability
- composability
- low cognitive overhead
Sometimes, less framework is more agent.
