Portrait AI

In early 2023 we built a feature that generated a full personal website from an Ethereum address. One wallet, one click, one portrait.

To access it, users minted an NFT on Zora. Minting started free; mints flooded in within the first minute, and we gradually raised the price from $5 to $50 to slow demand and cover the cost of generation.

[Video: talk at IPFS þing 2023, Brussels (talk.mp4)]

The idea

Portrait was a no-code decentralized website builder. You mapped a site to your ENS name, published to IPFS, and edited it visually. The weak point was onboarding. Even with good tools, making a personal site still means staring at an empty page and deciding what to say about yourself.

So we tried the opposite. Skip the form. Skip the prompt box. Skip the setup flow. Give us an Ethereum address and we would build the site for you.


Fitting it into GPT 3.5

Yes, we used GPT 3.5. That matters because this was early 2023, before LLM products could casually browse the internet, call tools, or behave like agents the way people expect now. If you wanted live context from the outside world, you had to fetch it yourself, clean it yourself, and compress it yourself.

That was a big part of what made Portrait AI interesting at the time. The user gave almost no input. In practice, the system had to do the agent-like work itself: gather live internet and onchain context from many sources, resolve conflicts, decide what mattered, and then feed a narrowed version of reality into a model that had a strict token budget.

  • Constraint: a 4,096-token context window, which meant there was no room for naive raw payload dumps.
  • Prompt budgeting: the backend counted tokens with tiktoken and aimed for roughly 3,650 prompt tokens so the model still had room to answer.
  • Hard trimming: if the prompt was still too large, the query was recursively shortened until it fit (see the sketch after this list).
  • POAP reduction: capped to the six most recent events, with each description cut to its first sentence.
  • Website reduction: page content was summarized externally into roughly 100 to 300 characters instead of sending raw pages.
  • NFT reduction: holdings were aggregated into collections, sorted by count, capped to the most meaningful groups, then collapsed further when needed.
  • Architecture consequence: instead of one giant prompt, we split the workflow into several smaller model calls for identity inference, loading-flow description, hero copy, heading copy, and section copy.
  • Product consequence: the novelty was not just that a model wrote the site. It was that the user barely had to tell it anything. The system went out, assembled the context itself, and returned a coherent homepage from an Ethereum address.
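
The budgeting and trimming steps can be sketched compactly. This is a minimal reconstruction, assuming the Node tiktoken bindings; PROMPT_BUDGET, fitPrompt, and reducePoaps are illustrative names, not the original code.

```ts
import { encoding_for_model } from "tiktoken";

// Aim for ~3,650 prompt tokens so the 4,096-token window keeps room for the answer.
const PROMPT_BUDGET = 3650;
const enc = encoding_for_model("gpt-3.5-turbo");

const countTokens = (text: string): number => enc.encode(text).length;

// Hard trimming: recursively shorten the prompt until it fits the budget.
function fitPrompt(prompt: string): string {
  if (countTokens(prompt) <= PROMPT_BUDGET) return prompt;
  return fitPrompt(prompt.slice(0, Math.floor(prompt.length * 0.9)));
}

// POAP reduction: keep the six most recent events, first sentence of each description.
interface PoapEvent { name: string; description: string; }
const reducePoaps = (events: PoapEvent[]): PoapEvent[] =>
  events.slice(0, 6).map((e) => ({
    name: e.name,
    description: e.description.split(/(?<=\.)\s/)[0],
  }));
```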

What happened after you clicked generate

The frontend looked conversational, but the system behind it was a queue worker. You minted the NFT, connected your wallet, chose a few personality tags, then clicked generate. The app posted a job to the backend, opened a live event stream, and started rendering AI messages in the modal while the actual work happened in the background.

The actual flow was precise. The client posted to /ai/generate, the backend validated NFT ownership and rate limits, added a BullMQ job, and returned a job id. The frontend then subscribed to /events/job/ai/:jobId over server-sent events and appended each message event to a typewriter-style chat log until the backend emitted completed.
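
A compressed sketch of that round trip, assuming Express and BullMQ: the route shapes match the description above, while the handler bodies and names are illustrative.

```ts
import express from "express";
import { Queue, QueueEvents } from "bullmq";

const app = express();
app.use(express.json());

const aiQueue = new Queue("ai-generate");       // Redis-backed job queue
const aiEvents = new QueueEvents("ai-generate"); // listens for worker progress

// POST /ai/generate: validate, enqueue, return a job id.
app.post("/ai/generate", async (req, res) => {
  // NFT ownership and rate-limit checks elided.
  const job = await aiQueue.add("generate", { address: req.body.address });
  res.json({ jobId: job.id });
});

// GET /events/job/ai/:jobId: relay worker progress as server-sent events.
app.get("/events/job/ai/:jobId", (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.flushHeaders();

  const onProgress = ({ jobId, data }: { jobId: string; data: any }) => {
    if (jobId === req.params.jobId)
      res.write(`event: message\ndata: ${JSON.stringify(data)}\n\n`);
  };
  const onCompleted = ({ jobId }: { jobId: string }) => {
    if (jobId !== req.params.jobId) return;
    res.write(`event: completed\ndata: {}\n\n`);
    res.end();
  };

  aiEvents.on("progress", onProgress);
  aiEvents.on("completed", onCompleted);
  req.on("close", () => {
    aiEvents.off("progress", onProgress);
    aiEvents.off("completed", onCompleted);
  });
});
```

On the client, an EventSource pointed at the same path appended each message event to the typewriter log and closed when completed arrived.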

[Diagram: the user requests /ai/generate; the backend exchanges data with onchain sources, sends the prompt to OpenAI, receives the reply, and returns the finished site]

Where the data came from

The backend treated a wallet like a primary key for a person. It fetched ENS records through Alchemy and resolver calls, Lens profiles, OpenSea and Rarible data, Twitter identity, Farcaster matches, Unstoppable Domains, POAP attendance, NFT holdings, website links, and summaries of those websites. All of it got folded into one temporary profile.

The important insight was not that the data was private. It was the opposite. The data was public but fragmented, and GPT made it feel singular. Once everything was pulled behind one Ethereum address, it was often surprising how much the model could infer about the person.


The full API surface

The interesting part was not just that we called a lot of APIs. It was that we mapped all of them back to one wallet and treated that address as the join key for a person; a sketch of the fan-out follows the list below.

[Diagram: one wallet address (0x1a2b...f9e0) fanned out in parallel to ENS, Lens, OpenSea, Rarible, Farcaster, Unstoppable Domains, POAP, Alchemy NFTs, and transfer history; derived lookups then resolved a Twitter username into an X profile from ENS, Lens, and Rarible hints, and resolved website URLs into TLDRThis summaries]
  • ENS via Alchemy and resolver calls: name, avatar, website, email, description, notice, keywords, social handles, and multiple chain addresses.
  • Lens: default profile, bio, handle, profile picture, cover image, custom attributes, and profile statistics.
  • OpenSea: user profile data, username, and marketplace-facing profile imagery.
  • Rarible: wallet user profile, description, cover media, image media, username, short URL, website, and Twitter hint.
  • Twitter API v2: resolved from ENS, Lens, or Rarible clues, then fetched for name, username, description, URL entities, location, profile image, and verification metadata.
  • Farcaster via Searchcaster: additional username and display-name matches tied to the wallet address.
  • Unstoppable Domains: extra domain identity attached to the same owner address.
  • POAP: recent event attendance, trimmed down before prompting so it stayed useful instead of bloated.
  • Alchemy NFT ownership: grouped collections, token names, thumbnails, and a reduced gallery view of what the wallet held.
  • Website discovery plus summarization: URLs pulled from ENS, Lens, Twitter, and Rarible, normalized, deduplicated, blacklisted when they were just social or marketplace links, then summarized before going anywhere near GPT.
  • Alchemy transfer history: checked against known FTX addresses and used only as a side-channel joke in the live chat flow.
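
The fan-out-and-join shape is easy to sketch. The fetcher names below are hypothetical stand-ins for the real API clients listed above; the point is the pattern: fire everything in parallel, tolerate individual failures, and merge the survivors under the wallet address.

```ts
type Source = Record<string, unknown> | null;

// Hypothetical placeholder fetchers, one per API in the list above.
const stub = async (_address: string): Promise<Source> => null;
const [fetchEns, fetchLens, fetchOpenSea, fetchRarible, fetchFarcaster, fetchPoap, fetchNfts] =
  [stub, stub, stub, stub, stub, stub, stub];

async function buildProfile(address: string) {
  // First pass: everything keyed directly off the wallet, fetched in parallel.
  const results = await Promise.allSettled([
    fetchEns(address), fetchLens(address), fetchOpenSea(address),
    fetchRarible(address), fetchFarcaster(address), fetchPoap(address),
    fetchNfts(address),
  ]);
  // A failed source becomes a gap in the profile, not a failed job.
  const [ens, lens, opensea, rarible, farcaster, poap, nfts] =
    results.map((r) => (r.status === "fulfilled" ? r.value : null));
  return { address, ens, lens, opensea, rarible, farcaster, poap, nfts };
}
```

The derived lookups (Twitter username, website summaries) ran as a second pass over this merged profile, since they depended on clues from the first.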

What was actually saved

The generated site was not a screenshot or a one-off HTML blob. The backend assembled a structured portrait object with metadata, portraitData, and appearance settings. portraitData held the actual Craft node tree that powered the page. metadata kept the selected keywords. Appearance settings derived a color palette and type classification from those keywords.

The saved content mixed deterministic assembly with generated copy. The hero used the inferred identity, ENS name or address, discovered avatar and cover imagery, and website buttons. Other sections pulled NFT collections into a gallery, surfaced socials and links into feature rows, added an OpenSea call-to-action, and inserted GPT-written titles and descriptions where the page needed a voice.
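
Roughly, the saved object had the shape below. This is reconstructed from the description above, so everything beyond the three top-level fields is an assumption about the schema, not the schema itself.

```ts
// Assumed shape; only metadata, portraitData, and appearance are confirmed above.
interface Portrait {
  metadata: {
    keywords: string[]; // the personality tags selected before generation
  };
  portraitData: {
    nodes: Record<string, unknown>; // the serialized Craft node tree that powers the page
  };
  appearance: {
    palette: string[]; // color palette derived from the keywords
    typeClassification: string; // type classification derived from the keywords
  };
}
```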


When the product talked back

Portrait AI had a personality. It did not just say 'loading.' It acted like a character. It greeted people, narrated what it was doing, and occasionally said things with more confidence than it should have had.

That personality mostly lived in the server-sent chat stream, not in the saved page itself. The modal messages were free to be dramatic, teasing, or weird because they disappeared once generation finished. That gave us room to make the waiting state feel deliberate instead of dead.

In one branch of the logic, if the wallet had interacted with known FTX addresses, the system would joke about it. That gave the delay a more personal touch instead of reading like generic AI slop.

gotu.eth (@nickvernij) · March 31, 2023

"Portrait AI bringing back traumas 😭 @portrait_gg"

[Screenshot: Portrait AI chat ridiculing a user about FTX transactions]

The privacy question

The demo worked too well. If you knew a public wallet address, the system could often identify the person, infer where they had been, summarize what they cared about, and turn that into a polished homepage. The individual facts were public. The assembly was what changed the feeling.

That was the real lesson. Onchain data does not stay harmless just because it is public. Once you combine identity, holdings, event attendance, and language models, you stop browsing artifacts and start generating dossiers.

Acknowledgements

Built in 2023, and remembered mostly for how clearly it showed both the power and the weirdness of early GPT products combined with onchain data. It was deprecated in 2025, when Portrait pivoted in a more social direction, but it was a fun experiment that taught us a lot about building with AI and where the real user value lay.

Footnotes

  1. Portrait AI used GPT 3.5 via the OpenAI API.

  2. Access was gated by the Portrait — 01100001 01101001 NFT on Zora, contract 0x129c45d0359417517436920ffb3cf78e410daecd on Ethereum.

  3. The collection ultimately reached 5,888 mints and was used to control access to a generation flow that was expensive to run.

  4. FTX filed for bankruptcy in November 2022, five months before this talk.