The following profile is the first post in our three-part series on tech and data for local governance.
Part 1: How Hamlet makes public data useful
Part 2: Interview: Pablo Sepúlveda, co-founder and CEO at zlvas
Part 3: The future of urban data
Once upon a time when I worked at Lyft, my friend on the government relations team walked over to my desk and said, “Hey, guess what I’m doing today?” Turns out he was about to deliver data to the San Francisco MTA — by hand, on a thumb drive.
This was 2016. Fast forward to today and municipal data infrastructure is still a mess. As often as not, city governments share information via spreadsheets attached to emails or published as text on a website. Suffice it to say, municipalities struggle to create, ingest, organize, and disseminate data. This is what we’ll be discussing over the next three posts in our series on government, tech, and data.
For today’s post, we’re starting with Bay Area-based Hamlet — a company making it easier to understand what’s going on in public hearings all across California. I was fortunate to get a warm intro to the founder, so all the information is straight from the source. Hope y’all enjoy.
Making public meetings make sense
Imagine walking into a council meeting. There's some issue you care enough about to physically show up at city hall (let’s say an upzoning that would let you add a rental unit over your garage). Although there’s an agenda, you can’t count on the item you’re there for coming up at a specific time. So you’ll just have to wait and hope that your city council actually gets to it in the next four hours (sometimes they don’t).
That’s the price of participating in local democracy and it’s the problem Hamlet’s founder and CEO, Sunil Rajaraman, is setting out to solve.
Sunil started building Hamlet after running for city council in Orinda (about 15 miles northeast of San Francisco). He decided to attend public hearings to better understand the ins and outs of his local government and quickly ran into a wall of incomprehensible jargon. Turns out, discussions about Orinda’s obligations under the Regional Housing Needs Allocation process are less than easy to understand.
He started using ChatGPT to create summaries and untangle the minutiae under discussion, and recognized that he probably wasn’t the only one who’d benefit from an easier way to follow and understand council meetings. Meanwhile, COVID had pushed public hearings online and/or onto video. This provided the type of unstructured data that LLMs are good at summarizing and enabled Hamlet to start making the day-to-day business of local government scrutable at scale.
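Hamlet hasn’t published its pipeline, but the core trick is easy to sketch. Here’s roughly what LLM summarization of a transcript chunk looks like with the OpenAI Python client; the model choice and the prompt are my assumptions, not anything Hamlet has shared:

```python
# Minimal sketch of LLM-based meeting summarization -- not Hamlet's
# actual pipeline. Assumes the `openai` package (v1+) and an
# OPENAI_API_KEY in the environment; model and prompt are my guesses.
from openai import OpenAI

client = OpenAI()

def summarize_transcript(transcript_chunk: str) -> str:
    """Summarize one chunk of a public-hearing transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarize city council transcripts for residents. "
                    "List agenda items discussed, decisions made, and votes "
                    "taken, in plain English."
                ),
            },
            {"role": "user", "content": transcript_chunk},
        ],
    )
    return response.choices[0].message.content

# Long hearings exceed any model's context window, so a real pipeline
# would chunk the transcript and then summarize the summaries.
```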
Hamlet collects public information from municipal websites and platforms like YouTube. The company also generates its own transcripts from raw recordings since municipalities often use less than cutting-edge transcription software.
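To give a flavor of that transcription step: open models like Whisper already produce timestamped transcripts from raw audio out of the box. A minimal sketch, assuming the open-source `openai-whisper` package and ffmpeg on the PATH (illustrative only; Hamlet hasn’t said what it uses, and the file name is a placeholder):

```python
# Illustrative transcription step -- not Hamlet's actual stack.
# Assumes `pip install openai-whisper` and ffmpeg available on the PATH.
import whisper

model = whisper.load_model("base")  # small model; trade accuracy for speed
result = model.transcribe("council_meeting_2024-05-07.mp3")  # placeholder file

# Whisper returns timestamped segments, which is exactly what you need
# to later jump into the recording at the right moment.
for seg in result["segments"]:
    print(f"[{seg['start']:8.1f}s] {seg['text'].strip()}")
```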
Once Hamlet has data ingestion set up for a city, they provide three core services: agenda monitoring, project tracking, and full-text search of meeting recordings. Without a product like Hamlet, humans have to sit in meetings and take notes, or watch entire hearings after the fact, sifting through hours of video to find the relevant information.
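The search piece is what replaces hours of scrubbing through video. As a toy illustration of the general technique (my own sketch, no claim this is Hamlet’s design), SQLite’s built-in FTS5 index gets you keyword-to-timestamp lookup in a few lines:

```python
# Toy full-text search over transcript segments using SQLite FTS5.
# My own sketch of the general technique -- not Hamlet's design.
# Assumes your SQLite build includes FTS5 (standard Python builds do).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE VIRTUAL TABLE segments USING fts5(city, meeting_date, start_sec, text)"
)

# In practice these rows would come from the transcription step above.
conn.executemany(
    "INSERT INTO segments VALUES (?, ?, ?, ?)",
    [
        ("Orinda", "2024-05-07", "312", "Item 4: RHNA compliance update..."),
        ("Orinda", "2024-05-07", "1840", "Motion to approve the ADU ordinance..."),
    ],
)

# A keyword query returns the matching segment plus its timestamp,
# so a user can jump straight to that moment in the recording.
for row in conn.execute(
    "SELECT city, meeting_date, start_sec, text FROM segments WHERE segments MATCH ?",
    ("ADU",),
):
    print(row)
```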
See below for a walkthrough of Hamlet’s search feature. The clip shows how users can start with a keyword and navigate to a record of a public hearing, complete with video clip and timestamped transcript.
For now, Hamlet’s paying customers are real estate developers and retailers who’d otherwise have to send staff to sit through meetings or pay local political consultants to do the same. That said, Sunil is civic-minded and wants to use the revenue-generating part of the business to cross-subsidize information in the public interest.
As a YIMBY Action member, I can already see how Hamlet will be helpful for pro-housing advocacy. YIMBYs face a major challenge in making sure local governments actually follow state-level laws after they’re passed, and Hamlet will make the monitoring and enforcement part of the job so much easier. Local journalism could benefit as well. Before, a reporter might have had to sit through hours of public hearings, taking notes and hoping something newsworthy would happen. Hamlet makes that exercise dramatically easier, so maybe it will help breathe a little life back into local reporting.
In many ways, Hamlet is following in the footsteps of other startups: in place of deep reform, the play is to build a product layer that abstracts away the complexity of the underlying legacy system. Stripe is the classic example. They make it easy for companies to set up payment processing, not by reforming the entire consumer payments ecosystem (probably impossible), but by handling all the hard stuff on behalf of their customers. For non-tech folks, if you’ve ever used TurboTax (or just had a good tax person), same idea: an additional layer between you and an overly complicated legacy system simplifies things for you as a user. Hamlet plays a similar role, just with public hearings and local government.
Democratizing Democracy
Historically, public hearings have institutionalized power asymmetries. These are long meetings at inconvenient hours, so selection effects ensure they’re populated by the least representative people in a community.
While Hamlet doesn’t remove the overhead of needing to participate in a meeting, it substantially lowers the bar for keeping track of what’s going on. Once upon a time that meant either attending yourself or being on a group chat with someone texting live updates.
Even in a world with published agendas, Hamlet serves an important purpose. One of the first things you learn as a product person (my character class in the great roleplaying game of capitalism) is that data likes to lie. It’s often wrong, incomplete, misleading, or missing altogether. In the case of Hamlet, they’ve come across at least one example of a disgruntled city employee deleting a bunch of meeting agendas on their way out. Having indexed, searchable data based on source-of-truth recordings provides important redundancy.
Important decisions are made every day in public hearings across the country. To the extent that structural barriers keep people from meaningful participation, getting information out of those meetings is an important task. But it’s not just about getting information out; cities are often publishing transcripts and posting video, after all. What really matters is making the raw information actionable, and that’s an extra step entirely.
From Transparency to Legibility
We talk a lot about transparency in government. Transparency is good and necessary. It’s also not enough.
Local governments are often quite transparent. Budgets are published, votes are recorded, and public hearings are, well, public. Most information is available in one form or another; that's the government being transparent.
But just because municipal information is available doesn’t mean it’s intelligible. Hours of recorded planning commission meetings aren’t automatically useful. That requires an additional step of processing — either by a human intern or via Hamlet.
Just as a pile of bricks doesn’t automatically provide shelter, terabytes of raw data aren’t immediately useful to a citizenry trying to monitor their government. Solving that last-mile problem gets us from transparency (with overwhelming amounts of information) to legibility (where we understand what’s actually going on and can take action).
Next week, we’ll look at a company making a different type of government data intelligible — namely, New York City’s ~4,000-page zoning code that dictates what can go where and for what reasons all across the city. We’ll sit down with zlvas co-founder and CEO Pablo Sepúlveda as he walks us through what it takes to build a digital representation of a city’s zoning regime and why it’s so important to make zoning intelligible to normal people.
But Jeff, what will the public meeting fanatics do if we no longer need to post threads of the meetings we watch online? These AI companies will take everything from us. We’ll be forced to spend more nights with our friends and loved ones like normal people.
I found my locality was posting videos, but the transcript was embedded as a VTT file in the page source rather than, you know, just easily downloadable. Then I found that the VTT content was really shittily constructed, with heaps of duplicate text, and sometimes it would be completely corrupted with overlapping timestamps. So I built a VTT parser that exports just clean text with timestamps, which I then load into a Claude project with a fine-tuned prompt to summarize and catch any key phrases based on my areas of interest. But like, why the hell should I have to do this myself? Why gatekeep this? Just because someone is worried the AI-generated summaries won’t be completely accurate? Like, what human-generated summaries are guaranteed to be accurate? The bureaucratic world turns slowly.
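For anyone facing the same mess, here’s a minimal version of that kind of parser (my own sketch, not the commenter’s code). It assumes standard `HH:MM:SS.mmm --> HH:MM:SS.mmm` cue lines, ignores styling tags and metadata blocks, and collapses the common roll-up pattern where each cue repeats the previous one’s text:

```python
# Minimal WebVTT -> clean timestamped text, in the spirit of the
# commenter's parser (my sketch, not their code).
import re
import sys

# Matches standard cue timing lines, e.g. "00:01:23.456 --> 00:01:27.890";
# captures the start timestamp. Trailing cue settings are ignored.
CUE_RE = re.compile(r"(\d{2}:\d{2}:\d{2}\.\d{3})\s*-->\s*\d{2}:\d{2}:\d{2}\.\d{3}")

def parse_vtt(path: str) -> list[tuple[str, str]]:
    """Return (start_timestamp, text) pairs with duplicate cues collapsed."""
    cues: list[tuple[str, str]] = []
    start, buf = None, []
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            m = CUE_RE.match(line)
            if m:                      # timing line starts a new cue
                start, buf = m.group(1), []
            elif not line:             # blank line ends the current cue
                if start and buf:
                    cues.append((start, " ".join(buf)))
                start, buf = None, []
            elif start:                # text line inside a cue
                buf.append(line)
    if start and buf:                  # flush a final unterminated cue
        cues.append((start, " ".join(buf)))
    # Drop consecutive cues whose text merely repeats the previous cue.
    return [c for i, c in enumerate(cues) if i == 0 or c[1] != cues[i - 1][1]]

if __name__ == "__main__":
    for ts, text in parse_vtt(sys.argv[1]):
        print(f"[{ts}] {text}")
```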