We all have ideas, but communicating them clearly and getting people to engage with them is not easy. So how do we do that in an age of information overload and diminishing attention spans?
If you're Pramod Sharma or Jerome Scholler, your answer is Napkin, a new “visual AI” platform the two built together that emerged from stealth today with $10 million in funding from Accel and CRV.
Napkin was born out of Sharma and Scholler's frustration with the endless documents and presentation decks that have become commonplace in the corporate world. Prior to starting Napkin, Sharma, a former Googler, founded educational gaming company Osmo. Scholler was on Osmo's founding team and previously worked at Ubisoft, LucasArts and Google.
“Napkin's flagship product is targeted at marketers, content creators, engineers and professionals involved in selling ideas and creating content,” Sharma told TechCrunch. “The goal is to minimize the time and effort in the design process by making it a generative flow.”
“Generative” here refers to generative AI, and yes, Napkin joins a long list of companies betting on the technology's potential. But a few things make the experience, which is currently entirely web-based, stand out.
With Napkin, users start with some text, like a presentation, an outline, or a similar document, or let the app generate text from a prompt (such as “Outline of best practices for job interviews”). Napkin then creates a Notion-like canvas with that text, adding “spark icons” to paragraphs that, when clicked, transform the text into a customizable visual.
These visuals are not limited to images, but span a variety of styles, including flowcharts, graphs, infographics, Venn diagrams, and decision trees. Each of these images includes an icon that you can swap out for another image in Napkin's gallery, as well as connectors that let you visually link two or more concepts. Colors and fonts are editable, and Napkin provides “decorators,” such as highlights and underlines, to help you shape the look of any element.
Your finished visual can be exported as a PNG, PDF, or SVG file, or as a URL that links to the canvas it was created on.
“Unlike existing tools that add a generative component to an existing editor, we're focused on a generation-first experience, where editing is added to complement generation, not the other way around,” Sharma said.
I gave Napkin a quick test run to see what it could do.
Out of morbid curiosity, while generating my starting document I tried to get Napkin to produce something controversial, like “instructions for killing someone” or “a list of highly offensive insults.” The AI Napkin uses wouldn't tell me how to kill someone, but it did comply with the latter request, with the caveat that the insults were for “educational purposes.” (There is a button on the canvas screen to report this kind of AI misbehavior.)
Just for fun, I tossed a TechCrunch article into Napkin (a draft of this very article, in fact), and it quickly became clear where Napkin's strengths and weaknesses lay.
Napkin works best with stories that have simple descriptions, rough ideas, and clearly established timelines. Simply put, if you feel an idea can be better explained with a visual, Napkin will almost always deliver.
Image credit: Napkin
If the text is a bit more vague, Napkin might try too hard and produce visuals that aren't based on the text at all. For example, take a look at this one, which is pretty much meaningless:
Image credit: Napkin
In the diagram below, Napkin produced completely unfounded pros and cons (a common failure mode of generative models): nowhere does the source paragraph mention privacy issues or a learning curve for Napkin.
Image credit: Napkin
Napkin sometimes suggests images and artwork for visuals. I asked Sharma if users need to worry about copyrights on these, and he said that Napkin doesn't use any public or IP-protected data to generate the images. “This is done internally in Napkin, so users don't need to worry about the rights of the generated content,” he added.
Image credit: Napkin
I also noticed that Napkin's visuals all follow a fairly generic, homogeneous design language. Some early users of Microsoft's generative AI features for PowerPoint described the software's results as “high school level,” and watching the Napkin demo, I was reminded of that comment.
That's not to suggest these problems are unsolvable. After all, Napkin is still in its early stages. The platform plans to launch paid plans, but not anytime soon. And given the size of the team, resources are a bit limited: Los Altos-based Napkin currently has 10 people, but plans to grow to 15 by the end of the year.
Moreover, few would argue that Sharma and Scholler, who sold Osmo to Indian education technology giant Byju's for $120 million in 2019, aren't successful entrepreneurs. Accel's Rich Wong backed Napkin in part because he was impressed with Osmo's exit; Wong was also an early investor in Osmo.
“Jerome and Pramod have an uncanny ability to make things that are extremely difficult from a technical perspective easy for users,” Wong said in a statement. “As partners from their first company, Osmo, we've watched them use Reflective AI to realize their vision for a new play movement, and we're thrilled to support this new chapter as Napkin brings visual AI to business storytelling.”
Sharma said the $10 million in funding will go toward product development and hiring AI engineers and graphic designers.
“All of our energy and resources are focused on enabling Napkin to generate the most relevant and compelling visuals based on text content,” he said. “There are endless ways to visualize and design, and we're investing capital into building this depth and improving the quality of our AI.”