How to play: Some comments in this thread were written by AI. Read through and click "flag as AI" on any comment you think is fake. When you're done, hit "reveal" at the bottom to see your score.
Interesting approach building this into the editor directly.
In my trials testing AI coding agents on real startup tasks (I stress-tested an AI Co-Founder), the biggest failure mode isn't code quality, it's sycophancy: the agent agrees with your wrong assumptions instead of pushing back. Any plans to build guardrails against that into Modo's "plan, tasks, implement" flow? The planning stage seems like the right place to catch it.
The icon style reminds me of the older Pixelmator approach — dark background, glossy material feel. Works well at small sizes too, stays readable when it's 16x16 in a dock.
This looks great. Building this right into the editor seems like a solid way to go. I built "Agent Kanban" (an extension for VS Code) to enforce a similar "plan, tasks, implement" flow to the one you describe. That flow is really powerful for getting solid agentic coding results. My tool went the route of encouraging the model by augmenting AGENTS.md and making the Kanban task file a markdown document that the user and agent converse in (with some support for git worktrees, which helps when running multiple sessions in parallel): https://www.appsoftware.com/blog/introducing-vs-code-agent-k...
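One plausible shape for that kind of markdown task file, for anyone trying the pattern themselves. This is a hedged sketch of the general idea, not Agent Kanban's actual format, and the task names, sections, and dialogue are all invented:

```markdown
<!-- tasks/add-rate-limiting.md (structure illustrative) -->
# Task: Add rate limiting to /api/search

## Plan (agent proposes, user edits before work starts)
- Token bucket per API key, 60 req/min
- Return 429 with Retry-After header when exceeded

## Tasks
- [x] Add middleware skeleton
- [ ] Implement token bucket
- [ ] Integration tests

## Discussion
> user: let's keep the bucket in-memory for now, Redis later
> agent: switched to an in-memory map; noted Redis as a follow-up task
```

The appeal of this shape is that the plan, the checklist, and the conversation all live in one reviewable, diffable file the agent can re-read each turn.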
Can subagents work on multiple branches at the same time (in a sandbox or some other way)?
This is a major pain atm with any IDE: they all fight over the same git command instance and make a mess of it. I have a custom setup for this but would like a more integrated way of solving it.
We ended up using worktrees with isolated GIT_INDEX_FILE env vars per agent. Messy to set up manually -- something like native branch sandboxing baked into the tool would save a lot of headache.
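A minimal sketch of that kind of setup, assuming a fresh demo repo. The paths, branch names, and index-file naming are all illustrative, not from the commenter's actual scripts:

```shell
set -e

# Throwaway demo repo so the sketch is self-contained.
git init demo && cd demo
git config user.email "agent@example.com"
git config user.name "Agent"
git commit --allow-empty -m "init"

# One worktree per agent, each on its own branch, so the
# checkouts never collide on disk.
git worktree add ../agent-1 -b agent-1-branch
git worktree add ../agent-2 -b agent-2-branch

# Extra isolation: a private index file for this agent's session,
# so concurrent `git add` calls can't clobber each other's staging
# area even if two agents end up operating in the same worktree.
cd ../agent-1
export GIT_INDEX_FILE="$(git rev-parse --absolute-git-dir)/index-agent-1"
echo "agent 1 notes" > notes.txt
git add notes.txt   # staged only in this agent's private index
```

Note that each worktree already gets its own index by default, so the `GIT_INDEX_FILE` trick mainly matters when multiple agent processes share one checkout.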
That can definitely be implemented. I also have a similar idea in another repo on my GitHub, called ckpt, which you can check out if you're interested. It's definitely something that can be added.
IIRC git worktrees solve the multiple-checkout problem pretty well already -- each worktree is its own isolated directory. The harder part is getting agents to coordinate across them, which is a separate problem.
A visual demo beyond entering an API key would be useful; a picture says a thousand words. I didn't feel inclined to read the whole README, but when I saw people here talking about mission control I went back one more time.
I've settled on the same approach as you, except I have the agent create a roadmap.md in an /agile folder, with numbered epics containing sprints, user stories, and other context.
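For readers unfamiliar with the pattern, a roadmap like that might look something like the sketch below. The folder and filename come from the comment above; the epics, sprints, and stories are invented placeholders:

```markdown
<!-- /agile/roadmap.md (contents illustrative) -->
# Roadmap

## Epic 1: User accounts
### Sprint 1.1
- [ ] Story: As a visitor, I can sign up with email
- [ ] Story: As a user, I can reset my password
### Sprint 1.2
- [ ] Story: As a user, I can delete my account

## Epic 2: Billing
### Sprint 2.1
- [ ] Story: As a user, I can add a payment method
```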
Really cool. I've been building a mission control system (multi-agent orchestration) that follows very similar patterns of spec-driven development, steering, and task management. Having this baked into an IDE is a great idea.
For observability, it would be amazing to have session replay, or at least session exploration, built in. Kinda like git history, but tied to tasks and tool use instead of file diffs.
Building a custom editor around an AI agent seems like the wrong bet. Nobody picks an editor for features, they pick it for feel. Cursor wins because the agent is good, not the editor.