Peer-to-peer hosting

The interesting part of Portrait hosting was not just that it used Waku. It was that the browser, the desktop node, the backend, and the contracts all had different jobs and different trust boundaries.

The frontend talked to the local hosting app over http://localhost:35927, the app generated and kept its own node key, the backend sponsored the onchain registration, and PortraitNodeRegistry made sure the binding between a Portrait and a node address still required real consent.


Localhost was the bridge

The web app did not try to embed the hosting node inside the browser. Instead it treated the desktop app as a local coprocessor with a tiny HTTP API.

In portrait-frontend, the shared node hooks point at one fixed address, http://localhost:35927, and use that bridge for whoami, generatesignature, login, host, unhost, and checkin.

```typescript
export const macAppUrl = 'http://localhost:35927'

const response = await get<HostingDataResponse>(`${macAppUrl}/node/whoami`, {
  withCredentials: false,
})
```

Why 35927

If you are going to run a permanent browser-to-node bridge over localhost, it helps if the number is stable, memorable, and unlikely to collide with the usual local dev ports that are often already taken.

So I mapped PORTRAIT into the valid port range. The visual below starts with the ASCII spelling, then folds the name into a 16-bit port number with sha256("PORTRAIT") % 65536. That 65536 is the full size of the port space: the 65,536 possible port values (0-65535) that combine with a network address to form a socket endpoint. The result lands on 35927, which is why the local bridge lives there.

$ derive-port PORTRAIT
>word="PORTRAIT"
.ascii=80 79 82 84 82 65 73 84
.hash=sha256("PORTRAIT")
.fit16=hash % 65536
.port=35927
localhost:35927
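The folding step above can be sketched in a few lines. This is an illustrative reconstruction, not the actual derive-port tool, and the exact byte interpretation of the digest is an assumption, so it may not land on 35927 exactly the way the original did.

```typescript
import { createHash } from 'node:crypto'

// Fold a word into the 16-bit port space: sha256(word) % 65536.
// Sketch only; the real derive-port tool may interpret the digest differently.
function derivePort(word: string): number {
  const digest = createHash('sha256').update(word, 'ascii').digest()
  // Treat the 32-byte digest as one big integer, then reduce mod 2^16.
  const asBigInt = BigInt('0x' + digest.toString('hex'))
  return Number(asBigInt % 65536n)
}
```

The useful property is determinism: anyone can re-derive the same port from the same word, so the number is stable without being arbitrary.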

Localhost was product infrastructure

The frontend even treated the response status from whoami as product state. A 200 meant the app was open and authenticated. A 403 meant the app was open but not yet logged in. A network failure meant there was no local app listening at all.

That is a small design choice, but it is what made the browser and the node feel like one product without collapsing them into one process.
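That three-way mapping can be written as a tiny classifier. The names below are illustrative, not the real hook code; this is just the shape of the decision.

```typescript
// Map the whoami probe result to product state (sketch; names are illustrative).
type BridgeState = 'connected' | 'needsLogin' | 'notRunning'

function bridgeStateFromProbe(status: number | null): BridgeState {
  if (status === 200) return 'connected'  // app open and authenticated
  if (status === 403) return 'needsLogin' // app open, not logged in
  return 'notRunning'                     // network failure: no local app at all
}
```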


Localhost was not wide open

The hosting app did expose an Express server, but it did not accept requests from arbitrary sites. It checked the request origin against FRONTEND_URL and explicitly enabled private-network access for the Portrait frontend.

So this was not a random localhost port hoping for the best. It was an intentional bridge from portrait.so into a trusted local process.

portrait.so (trusted frontend)
  → localhost:35927 (origin checked against FRONTEND_URL)
  → next() (allow headers and continue)

The node created its own address

On first launch, portrait-hosting-app generated its own Ethereum keypair locally. The node address was not the browser wallet, not a Coinbase or Privy session, and not a backend-managed account.

That separation is the whole architecture. A Portrait owner authorises attachment of a node, but the node remains its own cryptographic actor with its own key material on disk.

```typescript
// First launch: create a random wallet; persist mnemonic, private key, and
// address in electron-store. (The app itself notes this is obfuscation,
// not serious key security.)
const generated = ethers.Wallet.createRandom()
store.get('mnemonic') || store.set('mnemonic', generated.mnemonic.phrase)
store.get('ethereumPrivateKey') || store.set('ethereumPrivateKey', generated.privateKey)

// Re-derive the node address from the stored key so it stays stable across launches.
const ethereumPrivateKey = store.get('ethereumPrivateKey')
const wallet = new ethers.Wallet(ethereumPrivateKey as string)
store.get('ethereumAddress') || store.set('ethereumAddress', wallet.address)
```

Registration was a three-party handshake

Registering a node was not a simple wallet transaction. Three actors had to agree: the Portrait owner in the web session, the local node that controlled the node address, and the backend delegate wallet that actually paid gas.

The browser first asked the desktop app to produce a signature for registerNodeToPortraitId. Then it sent nodeAddress, portraitId, deadline, and sig to the backend. The backend validated the session owner, the deadline window, duplicate registration, node-count limits, and the recovered signer before it queued the sponsored contract call.

```typescript
// Pack the node address, portrait ID, and deadline into the proof digest.
const keccak256 = ethers.solidityPackedKeccak256(
  ['address', 'uint256', 'uint256'],
  [wallet().address, portraitId, deadline],
)

// Wrap the digest in the Portrait signature envelope for this action.
const message = await PortraitSigValidator.createMessage({
  action: 'registerNodeToPortraitId',
  target: 'PortraitNodeRegistry',
  targetType: 'Contract',
  version: 1,
  params: keccak256,
  expirationTime: deadline,
})

// Sign with the node's own key; this is the consent the contract later verifies.
const sig = await wallet().signMessage(message)
```
  • The desktop app signed the registration proof locally with its own node key.
  • The backend returned an identifier so the browser could reconnect the app after the transaction completed.
  • The frontend waited on an SSE job stream before telling the desktop app that the node was really registered.
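The backend's validation steps can be sketched as pure predicates. The names and the ten-minute window below are illustrative assumptions; the real /node/add handler also verifies the session owner and the recovered signer against the node address.

```typescript
// Sketch of the /node/add gate checks (illustrative names and window).

// The deadline must be in the future but within a short window,
// so a stale proof cannot be replayed later.
function validateDeadline(deadline: number, nowSec: number, windowSec = 600): boolean {
  return deadline > nowSec && deadline - nowSec <= windowSec
}

// Duplicate-registration and node-count checks for a Portrait.
function canRegister(
  existingNodes: string[],
  nodeAddress: string,
  maxNodes: number,
): 'ok' | 'duplicate' | 'limit' {
  if (existingNodes.includes(nodeAddress)) return 'duplicate'
  if (existingNodes.length >= maxNodes) return 'limit'
  return 'ok'
}
```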

The backend was the submit layer

The hosting app did not submit the transaction itself. portrait-api-backend turned node registration into the same delegate queue used for other sponsored actions in Portrait. That kept nonce management, auth checks, and retries on the application side.

This is the subtle architecture point: the node proved consent, but the backend remained the gas-paying submitter.

```typescript
const job = await queue.add('delegate/call', {
  contractName: 'PortraitNodeRegistry',
  methodName: 'registerNodeToPortraitId',
  args: [nodeAddress, portraitId, deadline, sig],
})
```

The contract enforced dual consent

Onchain, PortraitNodeRegistry required two different kinds of authority. msg.sender had to be an owner or delegate of the Portrait, and the supplied signature had to come from the node address being attached.

That made the registration model much better than "backend knows best." The sponsor could submit the transaction, but it could not invent either side of the consent model.

```solidity
// Owner-side consent: msg.sender must be an owner or delegate of the Portrait.
if (!portraitAccessRegistry.isDelegateOrOwnerOfPortraitId(portraitId, msg.sender)) {
  revert Unauthorized();
}

// Node-side consent: the proof must be signed by the node address being attached.
bool isValidSig = _verifyRegisterNodeToPortraitIdProof(
  nodeAddress,
  portraitId,
  deadline,
  sig
);

if (!isValidSig) revert InvalidSignature();
```
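The two checks compose into a single predicate that neither party can satisfy alone, which is the point of the dual-consent model. A pure sketch, with illustrative names:

```typescript
// Both authorities must hold for registration to succeed.
function mayRegister(consent: {
  senderIsOwnerOrDelegate: boolean // owner-side check via the access registry
  sigRecoversToNodeAddress: boolean // node-side check on the signed proof
}): boolean {
  return consent.senderIsOwnerOrDelegate && consent.sigRecoversToNodeAddress
}
```

The sponsor submitting the transaction changes neither input: it can pay gas, but it cannot manufacture either consent.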

Finality created the local session

The browser did not immediately mark the desktop app as connected. It waited for the delegate-call event stream and transaction receipt, then called localhost /node/login with the portraitId and backend-generated identifier.

Inside the desktop app, login paused briefly and then checked PortraitNodeRegistry.hasRegisteredNode(nodeAddress, portraitId) before storing the current username and portrait ID. The local session was derived from chain state, not optimistic UI.

There was even a recovery path for already-registered nodes. If the backend saw the node was already attached, it returned the existing identifier, and the frontend could skip straight to local login without sending another transaction.


Where peer-to-peer actually began

Registration only attached a node address to a Portrait. The actual peer-to-peer behavior started later when the user clicked host or unhost from the frontend. Those actions also went over localhost to the desktop app.

The app updated subscribedPortraits, subscribed to Waku content topics for those Portraits, and began processing latest and updates topics for them. The backend stored check-ins and hosted-portrait mappings, but the data plane lived in the node and the Waku network.

```typescript
subscribedPortraits.push(portraitId)
store.set('subscribedPortraits', subscribedPortraits)

await processStoreMessagesFromContentTopic(getLatestPortraitContentTopic(portraitId))
await processStoreMessagesFromContentTopic(postUpdateContentTopic(portraitId))
await subscribeToContentTopic(postUpdateContentTopic(portraitId))
```
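The topic helpers used above follow a simple naming scheme, reconstructed here from the topic strings quoted in the footnotes:

```typescript
// Waku content-topic builders matching the /portrait_test/1/... scheme.
const getLatestPortraitContentTopic = (portraitId: number): string =>
  `/portrait_test/1/latest-${portraitId}/proto`

const postUpdateContentTopic = (portraitId: number): string =>
  `/portrait_test/1/updates-${portraitId}/proto`
```

One topic pair per Portrait keeps subscription scoped: a node only processes traffic for the profiles it actually hosts.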

Browser Waku demo

The real Portrait system used Waku, so this demo stays on Waku. The public network is quiet enough that showing a fake message stream would be misleading, but peer connectivity is still real and still useful.

The panel below boots a throwaway browser Waku light node from a CDN build, waits for peers from the same public bootstrap set the backend used, and then streams the peer IDs it actually connects to. That keeps the artifact honest: it shows the network layer that is actually alive without pretending there is a healthy shared public data stream.

connected peers: 0

The backend was control plane, not the network

The API backend still mattered after registration. It stored HostingNode documents keyed by nodeAddress with per-device mappings for portraitId, identifier, deviceName, hostedPortraits, lastCheckIn, state, and location.

That let Portrait answer coordination questions like who is currently hosting a profile or whether a node has checked in recently. But the backend was not the host. The desktop app held the node key, and Waku carried the peer-to-peer traffic.
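The document shape described above can be sketched as a type. Field names come from the description in the text; the real schema and field types may differ.

```typescript
// Sketch of a HostingNode document, keyed by nodeAddress (assumed shape).
interface HostingDevice {
  portraitId: number
  identifier: string
  deviceName: string
  hostedPortraits: number[]
  lastCheckIn: number // unix seconds of the most recent check-in
  state: string
  location: string
}

interface HostingNode {
  nodeAddress: string
  devices: HostingDevice[]
}
```

With this index the backend can answer "who hosts Portrait X?" and "has node Y checked in recently?" without ever touching the data plane.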

  • Browser UI: discover the local app, request signatures, and trigger host or unhost actions.
  • Desktop app: own the node key, join Waku, and keep hosting state locally.
  • Backend: validate requests, sponsor registry writes, and index node activity.
  • Contracts: bind node addresses to Portrait IDs with explicit authorisation.

Lessons

If you want a browser UI to control a local peer, localhost is not a hack. It is a clean trust boundary if you lock the origin, keep the API narrow, and let chain state arbitrate the important transitions.

One practical lesson was that browser policy becomes part of the product. Brave explicitly blocks requests from public sites to localhost resources by default, which means a design like https://portrait.so talking to http://localhost:35927 is also a browser-compatibility problem, not just an application architecture problem.

The part I like most is that neither side got absolute power. The desktop node could not attach itself to any Portrait without the owner-side transaction path, and the backend could not attach any node without a signature from the node's own key.

Acknowledgements

Portrait Protocol has since become the foundation of the Open Internet Protocol, which carries a broader vision for decentralized social applications.

Footnotes

  1. The fixed bridge is literal in the code: EXPRESS_PORT = 35927 in the hosting app, and macAppUrl = "http://localhost:35927" in the frontend hooks. The number also matches sha256("PORTRAIT") % 65536.

  2. The hosting app sets Access-Control-Allow-Private-Network: true and rejects origins that do not match FRONTEND_URL, so localhost is treated as a private app bridge rather than a public endpoint.

  3. Brave documents a stricter stance than most browsers here: it blocks requests from public sites to localhost resources by default and uses a dedicated Localhost Access permission for trusted exceptions. See Brave's localhost permission writeup and the related implementation issue.

  4. On first launch the app generates a wallet with ethers.Wallet.createRandom() and stores the mnemonic, private key, and derived address in electron-store. The file itself notes that this is obfuscation, not serious key security.

  5. Backend validation for /node/add checks the authenticated owner of the Portrait, a short deadline window, duplicate registration, node-count limits, and whether the recovered signer from the supplied signature matches the node address.

  6. The hosting app uses explicit Waku topics like /portrait_test/1/latest-${portraitId}/proto, /portrait_test/1/updates-${portraitId}/proto, and /portrait_test/1/ping-all/proto.

  7. Waku's own docs position it as a service network and turnkey solution built on libp2p, and the same comparison page calls out the extra messaging layer Waku adds: Relay, Store, Filter, Light Push, content-topic routing, support for resource-limited devices, and RLN-based spam protection. See Comparing Waku and libp2p.

  8. The Waku team has also described Waku as the successor to Whisper, the early Ethereum peer-to-peer messaging protocol. In its January 2024 ecosystem roundup, the team explicitly frames Waku as the successor to Whisper and notes Whisper's early Ethereum origins. See January 2024 Waku Ecosystem Round-up. For libp2p's broader relevance, see Who uses libp2p? and Ethereum's own Networking layer.

  9. Varun Srinivasan's Farcaster note models how expensive full hubs become if every hub stores the whole network state as the network scales. See Decentralization of Hubs.

  10. The Open Internet Protocol site describes OIP as an open-source decentralized protocol for scalable, censorship-resistant, privacy-preserving social applications, combining Ethereum smart contracts with an offchain peer-to-peer network of relay and edge nodes. It specifically describes relay nodes as the data distribution layer and edge nodes as local caches that can re-broadcast data later. See openinternetprotocol.com.