Roadmap

A quick look at what we’re building next: smarter agents, support for more chains, and new capabilities like voice input, automation, and cross-chain arbitrage. Stay tuned; it’s only getting better.

1. Creating New Agents

Goal: Expand the variety and intelligence of agents available on the platform.

✅ Existing Agents:

Swap Agents: Juvex (Solana) and Oden (BSC)

Arbitrage Agent (Arbitra)

Next Candle Predictor / Technical Analysis Agent (Predix)

🔜 Upcoming Agents:

Sniper Agent (Zyra): Detects and executes new token launches (e.g., pump monitoring)

Liquidation Hunter (Drayn): Monitors DeFi lending protocols for liquidation opportunities

Gas Arbitrage Agent (Voltr): Executes trades based on gas fee volatility

Trend Follower Agent (Fluxa): Uses moving average or RSI-based strategies

Cross-Chain Arbitrage Agent (Xyber): Executes arbitrage between Solana and BSC
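As a rough sketch of the core check an arbitrage agent like Arbitra or Xyber runs, here is the net-spread calculation between two venues. The quote values and fee figure are hypothetical; in practice they would come from DEX routers on each chain.

```python
def arbitrage_spread(price_a: float, price_b: float, fees: float) -> float:
    """Net spread (as a fraction) from buying on venue A and selling on venue B,
    after subtracting total fees (swap fees, bridging, gas) across both legs."""
    gross = (price_b - price_a) / price_a
    return gross - fees

# Hypothetical quotes: a token trades at 1.000 USDC on a Solana DEX and
# 1.012 on a BSC DEX, with 0.6% total fees across both legs.
spread = arbitrage_spread(1.000, 1.012, 0.006)
if spread > 0:
    print(f"Opportunity: net spread {spread:.3%}")
```

Only when the spread stays positive after all fees does the agent execute both legs.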

2. Multi-Chain Agents (Sui & Aptos Support)

Goal: Expand Aigen's on-chain agent infrastructure to emerging Move-based blockchains, enabling autonomous operations on Sui and Aptos.

  • Launch of On-Chain Agent Infrastructure on Sui and Aptos:

    • Deployment of Aigen’s modular agent framework directly on Move-based chains

    • Secure agent execution with gas-efficient design patterns optimized for Sui/Aptos

  • Implementation of On-Chain Task Automation:

    • Autonomous scheduling and execution of predefined blockchain operations

    • Agent-triggered functions with deterministic outcomes and verifiable logs

  • Native Data Interaction:

    • Real-time indexing and state querying of Sui and Aptos smart contracts

    • Agent-level access to token, NFT, and DeFi protocol data for informed decisions

Example Agent Use Cases:

  • Auto-Swap Agents: Monitor and swap tokens using on-chain DEXs (e.g., Cetus, Econia)

  • Token Minting Bots: Automate creation and distribution of fungible tokens

  • NFT Creator Agents: Deploy and mint NFTs based on user input or scheduled triggers

  • DeFi Executors: Perform lending, staking, or liquidity provision via supported protocols
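The auto-swap use case above reduces to a poll-and-act loop. The sketch below is chain-agnostic: `get_price` and `swap` are hypothetical callables standing in for a chain SDK client (e.g., wrapping a Sui or Aptos DEX like Cetus), not real APIs.

```python
import time

def auto_swap_loop(get_price, swap, target_price, poll_secs=5.0, max_polls=None):
    """Poll a price feed and execute a swap once the target is hit.
    `get_price` and `swap` are placeholder callables wrapping a chain SDK."""
    polls = 0
    while max_polls is None or polls < max_polls:
        if get_price() <= target_price:
            return swap()
        polls += 1
        time.sleep(poll_secs)
    return None  # target never reached within max_polls

# Simulated feed: the price drops below the 0.95 target on the third poll.
prices = iter([1.02, 0.99, 0.94])
result = auto_swap_loop(lambda: next(prices), lambda: "swap executed",
                        target_price=0.95, poll_secs=0.0, max_polls=10)
print(result)
```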

3. Multi-LLM Model Support

Goal: Allow multiple large language models (LLMs) to power agent reasoning, strategy creation, or user interaction.

  • Integrate multiple LLM APIs (OpenAI, Claude, Groq, Mistral)

  • Assign LLMs to specific roles (e.g., planning vs. reasoning)

  • Allow users to choose the preferred LLM per agent

  • Model fallback & retry mechanism (resilience for failed calls)

  • Benchmark agent behavior by LLM (e.g., GPT-4 vs Claude)
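The fallback & retry mechanism can be sketched as a provider-agnostic wrapper: try each LLM in priority order, retrying transient failures with backoff before moving on. The provider names and callables below are illustrative stand-ins, not real SDK calls.

```python
import time

def call_with_fallback(providers, prompt, retries=2, backoff=0.1):
    """Try each LLM provider in order; retry transient failures with
    exponential backoff before falling back to the next provider.
    `providers` is a list of (name, callable) pairs; each callable takes a
    prompt string and returns a completion string (raising on failure)."""
    for name, call in providers:
        for attempt in range(retries):
            try:
                return name, call(prompt)
            except Exception:
                time.sleep(backoff * (2 ** attempt))
    raise RuntimeError("all LLM providers failed")

# Hypothetical providers: a flaky primary and a working fallback.
def flaky(prompt):
    raise TimeoutError("primary unavailable")

def stable(prompt):
    return f"plan for: {prompt}"

name, result = call_with_fallback([("primary", flaky), ("fallback", stable)],
                                  "hedge BTC exposure")
print(name, "->", result)
```

Logging which provider ultimately answered also feeds the per-LLM benchmarking item above.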

4. Automation & Custom Agents

Goal: Let users design, configure, and automate agents to run independently.

  • Agent creation wizard: allow user-defined conditions (e.g., "buy on X dip", "run every 5min")

  • Visual flow builder for chaining actions (e.g., If A → Then B)

  • Task scheduler (cron, price triggers, time intervals)

  • Agent templates (starter strategies for copy & tweak)

  • Auto-pause or error recovery system
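A user-defined condition like "buy on X dip" boils down to a small trigger object the scheduler can evaluate on every price tick. This is a minimal sketch of one such trigger; the field names are illustrative, not a final wizard spec.

```python
from dataclasses import dataclass

@dataclass
class DipTrigger:
    """Fires when price drops by `dip_pct` percent from a reference
    (e.g., the 24h high)."""
    reference_price: float
    dip_pct: float  # e.g., 5.0 means "fire on a 5% dip"

    def should_fire(self, current_price: float) -> bool:
        drop = (self.reference_price - current_price) / self.reference_price * 100
        return drop >= self.dip_pct

trigger = DipTrigger(reference_price=100.0, dip_pct=5.0)
print(trigger.should_fire(96.0))   # 4% drop  -> False
print(trigger.should_fire(94.5))   # 5.5% drop -> True
```

The same shape extends to time-interval and cron-style triggers, which is what lets the visual flow builder chain them (If A → Then B).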

5. CEX Buy/Sell Integration

Goal: Extend agents beyond DeFi to centralized exchanges for broader strategy support.

  • CEX API integrations (Binance, KuCoin, OKX)

  • Embedded wallet delegation or API key management system

  • Cross-venue strategy: Swap on DEX, hedge on CEX

  • Trade execution logging (slippage, latency)

  • Compliance: Alerts or limits on CEX trading thresholds
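The execution-logging item above amounts to recording slippage and latency per fill. A minimal sketch, with hypothetical fill values:

```python
def log_execution(expected_price, executed_price, sent_at, filled_at):
    """Compute slippage (basis points) and latency (ms) for a filled order."""
    slippage_bps = (executed_price - expected_price) / expected_price * 10_000
    latency_ms = (filled_at - sent_at) * 1000
    return {"slippage_bps": round(slippage_bps, 2), "latency_ms": round(latency_ms, 1)}

# Hypothetical fill: quoted at 100.00, filled at 100.05, 120 ms after the order was sent.
record = log_execution(100.00, 100.05, sent_at=0.0, filled_at=0.120)
print(record)
```

Persisting these records per venue is what makes cross-venue strategies (swap on DEX, hedge on CEX) auditable.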

6. Mobile App

Goal: Let users track, configure, and interact with their agents on the go.

  • Cross-platform mobile app (React Native or Flutter)

  • Agent status dashboard

  • Notifications (push for alerts or trade executions)

  • Secure login with biometric & wallet sync

  • Start/pause agents and adjust basic settings from mobile

7. Voice, Image, and Video Support

Goal: Enable multimodal input for agent instruction, analysis, or data ingestion.

  • Voice-to-command (e.g., "Start arbitrage agent on BSC now")

  • Image recognition (e.g., scan price charts and trigger trades)

  • Video input support for UI interaction walkthroughs

  • Multimodal agent interface (prompting via multiple input types)

  • Future agent idea: “Sentiment scanner” that analyzes YouTube videos or chart screenshots
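After speech-to-text, the voice-to-command step is essentially mapping a transcribed sentence to an agent action. A minimal regex-based sketch (the command grammar here is an assumption, not a final design):

```python
import re

# Matches commands like "Start arbitrage agent on BSC now".
COMMAND = re.compile(r"(?i)\b(start|pause|stop)\s+(?:the\s+)?(\w+)\s+agent(?:\s+on\s+(\w+))?")

def parse_command(text):
    """Map a transcribed voice command to a structured agent action, or None."""
    m = COMMAND.search(text)
    if not m:
        return None
    action, agent, chain = m.groups()
    return {"action": action.lower(), "agent": agent.lower(),
            "chain": chain.upper() if chain else None}

print(parse_command("Start arbitrage agent on BSC now"))
```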

8. MCP (Media Context Protocol)

Goal: Enable agents to understand and reason over multimodal content (voice, image, video) with contextual awareness.

  • Define internal MCP schema:

    • Context tags (e.g., "financial chart", "crypto tweet", "live market data")

    • Media type categorization (image, video, voice)

    • Source metadata (origin, timestamp, relevance)

  • Implement MCP parser:

    • Extract context and structured signals from media inputs

    • Feed into LLM for enhanced decision-making

  • Integrate with agents:

    • Allow agents to process media as part of their input stream

    • Example: Candle Predictor Agent enhances accuracy with chart screenshots

  • Support event-triggered analysis:

    • Voice alert from Telegram → triggers parsing

    • Image shared → sentiment extraction → signal for agent

  • Logging & transparency:

    • Users can view how MCP context influenced the agent’s final decision
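Putting the schema items above together, an internal MCP record might look like the sketch below. Field names and types are illustrative placeholders, not a final spec.

```python
from dataclasses import dataclass, field

@dataclass
class MediaContext:
    """Sketch of an internal MCP record: media type, context tag,
    source metadata, and parser-extracted signals."""
    media_type: str          # "image" | "video" | "voice"
    context_tag: str         # e.g., "financial chart", "crypto tweet"
    source: str              # origin of the media (e.g., a Telegram chat)
    timestamp: float         # unix time the media was received
    relevance: float = 1.0   # 0..1 weighting for downstream agents
    signals: dict = field(default_factory=dict)  # structured output of the MCP parser

# Hypothetical record: a chart screenshot shared in Telegram.
ctx = MediaContext(
    media_type="image",
    context_tag="financial chart",
    source="telegram",
    timestamp=1700000000.0,
    signals={"pattern": "bullish engulfing", "timeframe": "4h"},
)
print(ctx.context_tag, ctx.signals["pattern"])
```

Logging each `MediaContext` alongside the agent's decision is what makes the transparency item above possible: users can trace which media signals influenced an action.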
