How C.R.I.S.P Turned 140+ Ideas into a Community-Driven Shortlist
When we started exploring how to turn raw community ideas into real, fundable projects, one thing became obvious very quickly: we needed a clear, fair, and scalable way to sort through them.
That is how the Community Review and Ideation Selection Process (C.R.I.S.P) was born. It began as a collaborative experiment to strengthen the very first step of Deep Funding’s pipeline: ideation. Over the last few months, C.R.I.S.P and the teams supporting it have been testing how community-submitted ideas can be filtered with clarity, consistency, and fairness so the strongest ideas can eventually become full Requests for Proposals, and ultimately, funded work.
This blog focuses on the first Open Ideation Challenge that went through C.R.I.S.P: what happened, who was involved, what worked, and what we learned along the way.
In the early days, ideation at DEEP was simple but messy. Great ideas were coming in, but there was not yet a robust system to check alignment with Deep Funding’s mission, apply consistent criteria across reviewers, and scale beyond a handful of submissions.
C.R.I.S.P was created to address that gap. Its mandate is to design and run a repeatable review process, apply deeper evaluation after an initial pass, document what is being learned, and rotate membership so no small group holds all the power over time.
If you want the full backstory on why C.R.I.S.P exists and how it was designed, your next read should be Esther Galfalvi’s original post, “C.R.I.S.P Update & Event Announcement” on the Deep Communities blog. This article picks up from there and zooms in on the first Open Ideation Challenge run through C.R.I.S.P.
The Open Ideation Challenge is a new way for anyone, anywhere, to help shape the future of decentralized AI at DEEP. Participants submit a short idea that clearly describes a meaningful problem and an AI-driven solution.
If an idea passes the filters and wins community support, it can become an RFP and, eventually, a fully funded project.
In this first C.R.I.S.P-filtered cycle, ideas were submitted through the ideation platform. Ideas that passed the full process and the community vote were eligible for $1,000 each, plus an extra $1,000 if they later became full RFPs.
By the time we reached the finale, the scale of participation was clear: around 150 ideas were submitted, 9 finalists were selected for the finale, and 3 winners were chosen to move forward as the community’s top picks.
This challenge brought together multiple groups working in sequence.
Idea submitters included builders, researchers, and community members who proposed AI-driven solutions.
The Pre-C.R.I.S.P team handled the first filter, applying simple yes or no criteria to quickly check alignment and baseline feasibility.
The C.R.I.S.P team went deeper with more detailed scoring and discussion, focusing on factors like budget realism, safety and ethics, and longer-term potential.
Finally, the wider community made the final call through live finale voting and social polls.
Together, these groups turned a raw flood of ideas into a focused shortlist and, ultimately, three community-backed winners.
On the front end, the Open Ideation Challenge was intentionally lightweight: submit a short idea that clearly states the problem and the AI-driven solution.
Behind the scenes, the work was more complex. Submissions had to be tracked and categorized, duplicates and overlapping themes had to be identified, and ideas had to be mapped against Deep Funding’s mission, current RFPs, and future directions.
This is a key reason a dedicated ideation portal is now being developed: to streamline backend processes, detect duplicates earlier, and compare new ideas with existing proposals more efficiently.
The first filter was Pre-C.R.I.S.P, which used a small set of yes or no criteria to decide which ideas deserved deeper review. Reviewers looked for a clear problem statement, basic feasibility, alignment with Deep Funding’s vision, innovation, and potential impact or relevance.
Each idea also received a star or no-star marker as a lightweight intuition signal.
The challenges here were practical and human. Reviewing roughly 140 to 150 ideas quickly but thoughtfully pushed the limits of a simple yes or no system. Some ideas were beautifully written but weak in feasibility, while others were rough in presentation but contained a strong, unique spark. Different reviewers also had slightly different thresholds for what counted as feasible or aligned, which required active calibration and conversation.
Ideas that survived Pre-C.R.I.S.P were re-examined by the C.R.I.S.P team using more detailed criteria. This included budget considerations, safety and ethics, and revenue potential or long-term value creation.
At this stage, the team was not just asking whether an idea was good. The questions became: Is it responsible? Is it implementable within realistic constraints? Does it fit the ecosystem right now?
This stage introduced its own challenges. One was alignment and learning across teams: both Pre-C.R.I.S.P and C.R.I.S.P independently reviewed the same large set of ideas so outcomes could be compared, and differences could become learning signals rather than confusion. Another was avoiding score blindness. Numerical ratings help structure decisions, but they can miss context and long-term potential if they are used without discussion. The team also had to balance ambition and realism, especially for visionary ideas that might require phased execution, partnerships, or a different framing to become fundable.
To prevent promising ideas from falling through the cracks due to rigid thresholds, C.R.I.S.P introduced a Golden Ticket system.
Any Pre-C.R.I.S.P or C.R.I.S.P member could nominate an idea to be fast-tracked directly to the community vote. For Pre-C.R.I.S.P Golden Tickets, at least 2 out of 5 C.R.I.S.P members had to agree, adding checks and balances.
This mechanism helped recover ideas that sat just below cutoffs but were clearly mission-aligned, explored new directions for the ecosystem, or strongly matched reviewer intuition and community needs.
After months of review, calibration, and iteration, the challenge arrived at its final shape: from 140+ ideas to 9 finalists, and then 3 winners selected with strong community input.
A live finale was scheduled for 4 December 2025 at 18:30 UTC, hosted on Zoom, with community polls on X and LinkedIn to widen participation.
The public preview featured seven leading ideas: MeTTa-Based Decentralized Workflow Orchestrator, AI-Powered Anti-Corruption Watchdog, Refugee Care AI, AI Uncertainty Navigator, Deep Funding University Network, AGI-Powered Adaptive Tutor and On-Chain Credentialing Framework, and AwareData: Algorithm and Data Auditing Service.
Organizing the finale brought familiar coordination challenges: working across time zones, designing a format that balanced storytelling with comparable information for voters, and ensuring links, Zoom access, and communication channels worked smoothly across Deep Communities, deepfunding.ai, X, and LinkedIn.
Even before every follow-up step is complete, the outcomes are already clear.
First, the Open Ideation Challenge validated a full pathway from idea to community vote. Community submits ideas, Pre-C.R.I.S.P screens for alignment and feasibility, C.R.I.S.P evaluates more deeply, Golden Tickets rescue overlooked ideas, and a shortlist goes to live community voting. This is now a repeatable pattern that can be refined and reused.
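To make the pathway concrete, here is a minimal sketch of that filtering logic in Python. Everything in it is illustrative: the criteria names, the idea records, and the vote threshold are assumptions for the sketch (the 2-of-5 agreement rule for Golden Tickets is from the process description above), not C.R.I.S.P’s actual tooling or scoring rules.

```python
# Hypothetical sketch of the idea pipeline described above.
# Criteria names and data shapes are illustrative assumptions,
# not the actual C.R.I.S.P implementation.

def pre_crisp_pass(idea):
    """Yes/no screening: every baseline criterion must hold."""
    checks = ("clear_problem", "feasible", "mission_aligned")
    return all(idea.get(c, False) for c in checks)

def golden_ticket(idea, required_votes=2):
    """A nominated idea is fast-tracked if enough C.R.I.S.P members
    (at least 2 of 5, per the process above) agree."""
    return idea.get("nominated", False) and idea.get("gt_votes", 0) >= required_votes

def shortlist(ideas):
    """Ideas advance to community voting if they pass the screen
    or are rescued by a Golden Ticket."""
    return [i for i in ideas if pre_crisp_pass(i) or golden_ticket(i)]

ideas = [
    {"title": "A", "clear_problem": True, "feasible": True, "mission_aligned": True},
    {"title": "B", "clear_problem": True, "feasible": False, "mission_aligned": True,
     "nominated": True, "gt_votes": 2},  # rescued despite failing the screen
    {"title": "C", "clear_problem": False, "feasible": True, "mission_aligned": False},
]
print([i["title"] for i in shortlist(ideas)])  # → ['A', 'B']
```

The sketch shows why the Golden Ticket matters: idea B fails the strict yes/no screen but still reaches the community vote because reviewers vouched for it, which is exactly the safety valve against rigid thresholds described above.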
Second, it established real funding for real ideas. The winners unlock $1,000 for being selected through community voting, plus another $1,000 if the ideas graduate into full RFPs. This creates a tangible bridge from contribution to specification to funded work.
Third, it generated clear lessons that are already shaping what comes next. Yes or no scoring is useful for an initial filter but too blunt on its own, which is why future cycles will push toward more granular scoring and sub-categories. Infrastructure has to scale with ambition, reinforcing the need for a stronger portal that supports deduplication and comparisons across challenges. Rotation also matters. C.R.I.S.P membership rotates every 3 to 6 months to keep the process open and community-driven, with new members joining as others step back.
If you want to plug in, here are the next steps.
Start with Esther Galfalvi’s “C.R.I.S.P Update & Event Announcement” to understand the rationale, the design decisions, and the long-term vision behind the process you have now seen in action.
Join the community and follow the projects. Head to the Community page, create an account, and use it to stay updated on events and challenges, follow Open Ideation projects as they develop, join Circles and WorkGroups, and participate in voting and discussion.
The first Open Ideation Challenge was more than a one-off experiment. It was a proof of concept that decentralized ideation works when the community is trusted with real power.
C.R.I.S.P is still evolving. The tools will get better, and the criteria will get sharper, but the core idea remains the same: bold ideas from the community, evaluated transparently, funded fairly, and built together.