A few months ago, OpenAI launched the GPT Store, an online marketplace where users can build and list AI-powered chatbots tailored to particular jobs, from coding help to answering trivia questions. The GPT Store is a genuinely useful tool, but to use it you have to rely on OpenAI’s models exclusively, a restriction some users and chatbot creators find objectionable.
So entrepreneurs are building alternatives.
One, called FlowGPT, aims to be a kind of “app store” for GenAI models, including DALL-E 3 from OpenAI, Claude from Anthropic, Gemini from Google, and Llama 2 from Meta, paired with front-end experiences (think text fields and prompt suggestions) for those models. With FlowGPT, users can create their own GenAI-powered apps and publish them to the public, receiving tips for their work.
Lifan Wang, a former engineering manager at Amazon, and Jay Dang, a computer science dropout from UC Berkeley, co-founded FlowGPT last year with the goal of developing a platform that would enable users to quickly create and share GenAI products.
“Users still have to go through a learning curve to use AI,” Dang said in an email interview with TechCrunch. “With every iteration, FlowGPT lowers the bar, making it more accessible.”
Dang describes FlowGPT as a GenAI-powered app “ecosystem”: a collection of developer tools and infrastructure connected to a marketplace of GenAI apps and the users who run them. Users see a feed of recommended apps and app collections organized into popular categories (e.g., “Creative,” “Programming,” “Game,” “Academic”), and creators can adjust the design and behavior of their GenAI apps.
On FlowGPT, users interact with GenAI apps through a chat window that looks a lot like ChatGPT. They can share links to chats, fill in prompts, give apps a thumbs up or down, and tip individual app developers. Every app includes a description from the developer, along with information about when it was made, how many times it has been used, and which model the developer recommends to power it.
The reason I say “model the developer recommends” is that FlowGPT apps are essentially just prompts that prime models to respond in a particular way. The “Scared Girl from Horror Movie” app, as the name suggests, instructs ChatGPT to narrate a horror story about a frightened girl. “TitleTuner” prompts ChatGPT to tune headlines for better search engine rankings. And SchoolGPT uses ChatGPT to give detailed answers to math, physics, and chemistry problems.
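To make that concrete: an app of this kind is little more than a canned system prompt prepended to whatever the user types before the conversation is handed off to a model. The sketch below is purely illustrative, written against OpenAI’s public Python SDK; the prompt wording and helper function are hypothetical, not anything FlowGPT has published.

```python
# Illustrative sketch only: what a prompt-style "app" boils down to.
# Uses OpenAI's public Python SDK; FlowGPT's actual implementation isn't public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical TitleTuner-style app: the whole "app" is this system prompt.
APP_PROMPT = (
    "You are TitleTuner. Rewrite any headline the user gives you so it ranks "
    "better in search engines, and briefly explain what you changed."
)

def run_app(user_input: str, model: str = "gpt-3.5-turbo") -> str:
    """Send the canned app prompt plus the user's text to the chosen model."""
    response = client.chat.completions.create(
        model=model,  # the "recommended model" is just a parameter here
        messages=[
            {"role": "system", "content": APP_PROMPT},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content

print(run_app("10 ways AI will change local journalism"))
```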
You’ll notice that ChatGPT is heavily used. Spend enough time on FlowGPT and you’ll also discover that many of the prompts stop working when the default model is changed.
Sometimes the chosen model simply lacks the capabilities the prompt assumes. Other times, the prompt runs up against the protections and filters built into the model.
Speaking of those protections…
Indeed, several of FlowGPT’s most popular apps are jailbreaks designed to get around models’ safeguards. The marketplace hosts many variations on DAN (“Do Anything Now”), a popular prompting technique for coaxing models into responding outside of their usual guardrails. There are also apps like WormGPT that claim to be able to write malware (and link to more capable, paid versions of the chatbot on the dark web), as well as dating simulators that violate OpenAI’s policy prohibiting the promotion of romantic relationships.
Many other apps have the potential to be harmful, such as those that present themselves as reliable health resources or as therapy. GenAI models like ChatGPT are notoriously lousy at giving health advice; one study found that an older version of ChatGPT rarely referred people to specific resources for help with suicide, addiction, or sexual assault.
Offending apps, such as those that advise users on how to create deepfake nudes with an AI image generator (and many of these exist), can be reported to FlowGPT’s community manager for review. FlowGPT also provides a toggle for “sensitive content.”
However, it’s evident from the site that FlowGPT has a moderation problem. The toggle works so poorly that I hardly noticed a difference in the apps surfaced when it was turned on. It’s the Wild West of GenAI apps.
Dang insists that FlowGPT is an ethical platform that abides by the law, with risk-mitigation measures in place aimed at “[ensuring] public safety.”
“We’re proactively engaging with top authorities in the field of AI ethics,” he said. “Developing comprehensive strategies to minimize risks associated with AI deployment is the main focus of our collaboration.”
In my opinion, the company has some work to do, given that this writer was able to obtain instructions for robbing a bank and selling drugs from FlowGPT apps.
Investors appear to think differently.
This week, DCM, an existing backer of FlowGPT, participated in a $10 million “pre-Series A” round led by Goodwater. In an email to TechCrunch, Goodwater partner Coddy Johnson said he believes FlowGPT is “helping to lead the way” in GenAI by offering creators and users “the widest choice” and “the most flexibility and freedom.”
“We think open ecosystems hold the greatest promise for AI,” Johnson continued. “With FlowGPT, creators can select their models and work together with their communities.”
Something tells me that not all of the vendors behind the models FlowGPT is tapping, especially the ones that have pledged to prioritize AI safety, will be quite as enthusiastic.
Nevertheless, absent consequences from those vendors (as of publication time, at least), FlowGPT, which isn’t yet profitable, is setting the stage for growth. According to Dang, the company is expanding its 10-person team in Berkeley, developing a revenue-sharing plan for app developers, and beta testing iOS and Android apps that will bring the FlowGPT experience to mobile devices.
“We’ve already shown we are on the right track with millions of monthly users and a fast growth rate, and we believe it’s time to accelerate the progress,” he said. “We are setting a new standard for immersion in AI-driven environments, offering a world where creativity knows no bounds… [Our] mission remains to cultivate a more open and creator-focused platform.”
We’ll see how that develops.