Networked GPS and the Era of Precise Positioning

The Space Capital Podcast | November 21, 2023

In episode 2.19, we're speaking with Sean Gorman, the co-founder and CEO of Zephr, a company developing a novel network-based approach to augmenting GPS, focused on improving the accuracy and resiliency of GPS/GNSS for mobile phones and wearables.


Show notes

Learn more about the past, present and future of GPS and its impact on the world in today’s podcast.

Episode Transcript


“And so the first thing when we started approaching this problem is like, "Hey, if we could just deliver purely a software, we can avoid all of those massive headaches that we saw, and all of these things that prevent technology diffusion at scale. If we can come up with an approach that provides the optimization and the error corrections to improve GPS purely as software, then that gets rid of a lot of these obstacles and opens up some immediate markets for us."

Chad Anderson:

Welcome to the Space Capital Podcast. I'm your host, Chad Anderson, founder and managing partner at Space Capital, a seed-stage venture capital firm, investing in the space economy. We're actively investing out of our third fund with a hundred million under management. You can find us on social media @SpaceCapital.

In this podcast, we explore what's happening at the cutting edge of the entrepreneurial space age, and speak to the founders and innovators at the forefront. This is the Space Capital Podcast, and today we're speaking with Sean Gorman, co-founder and CEO of Zephr, a company that's developing a novel network-based approach to augmenting GPS through the use of AI-based computational methods. Or to put it simply, they're focused on improving the accuracy and the resiliency of GPS for mobile phones and wearables. So we led Zephr's seed round a couple of months ago, and we were joined by a great syndicate of investors, like First Spark and Stanford Research Institute. So Sean, it's really great to have you on, thanks for taking some time to join us today.

Sean Gorman:

Oh, thanks for having me, and thanks for investing as well.

Chad Anderson:

Our pleasure. So to start things off, I'd like to talk a little bit about your background. I mean, the founding team at Zephr brings a lot of experience in data and mapping, you've got a long track record spanning a number of years, and have had multiple successful exits. So you yourself have a PhD in mapping, you have a lot of relevant experience in computer vision mapping, AR, digital twins, and satellite imagery. So I would love to just take a couple of minutes and have you explain your background, what has led you to where you are today. Also, I see that you had described yourself as a paleo map nerd trying to create an alternate reality, so I would also love to hear what that means.

Sean Gorman:

Cool. Actually, we can kind of start there because it's somewhat relevant to the backstory as well. The first startup we did, actually our CTO had written a book called Neogeography for O'Reilly, with this concept that we're... Back in the 2007 timeframe with Google Maps coming out, that there was going to be this new frame of geography that was delivered over the web, and eventually later to mobile devices. And I'd come from a background of traditional geography, where that was my academic background, and more from a computational statistical side of things. And so when we were working in the space, and neogeography came out, we started joking that we were paleogeographers, because we were the folks that had come out of more traditional geography, but were very interested in how these new computational frameworks would change geography as we had known it from an academic background.

And it's probably a good place where the story really starts, that my intentions when this kind of all got going was being an academic. I was at George Mason University and working on a PhD, and ironically it was actually in a school of public policy, that was founded by two geographers from Johns Hopkins, and basically found that they could get more grant money as public policy folks than they could as geographers. And they had started a center for doing transportation simulation analysis, and George Mason is right outside Washington DC along the Beltway. If you've ever been to DC and driven in DC traffic, transportation policy is a big deal around there.

And so they got a ton of funding for doing these really high-end simulation and modeling things, and then applying the outcomes to policy. And that was fascinating to me. I had done my master's work in economic geography looking at the structure of the internet, which the late nineties was still kind of a nascent upcoming thing. And mapping out that geography of fiber optic lines, and seeing if it made a difference in economic growth and development, and what kind of policy implications that could have.

So them being a big infrastructure place, and my background being in doing that kind of statistical analysis of information networks, it seemed like a really cool place to go. And so I went over there working with a professor named Roger Stough, who was amazing and great, and he gave me a long leash to start working on things. But the interesting part of this is I enrolled in 2001 in August, and we did the traditional offsite to kick off a university onboarding. And two weeks later, 9/11 happened. And so that really shaped my direction in a big way. And largely as an outcome, on the heels of 9/11 there was a lot of interest in critical infrastructure, and the vulnerability or resiliency of that critical infrastructure to both natural disasters and terrorist attacks. And George Mason got an earmark from Congress into the law school to study critical infrastructure.

And so the professors I was working with, Laurie Schintler and Raj Kulkarni, who were two really great computational folks, we were talking about the work I had done for my master's thesis. And I had this database I had built back then of fiber optic lines and cables, and where they ran, and the right of ways, getting down to a pretty high level of precision for that stuff. Relatively speaking, it was GIS data. And they said, "Hey, we should put in a proposal with the data you have from your master's work, and see if we can get some grant money." I said, "Oh, that'd be awesome." That's kind of the deal I had with my advisor, Dr. Stough, was that I had a year to come up with a research topic and see if I can find some funding for it. And if I couldn't find something, then I would get attached to an existing project.

And so this awesome opportunity, it's like, "Oh, Laurie, Raj and I can kind of drive our own destiny and put something together." So we sent in a proposal and we heard nothing, we're like, "Oh, it's probably too low level for a law school, or too techie." Then a couple of months later we got a call, and I guess during that time they were trying to find a director to run the center. And they found somebody that had been in government, and came out, and was going through their proposals, and came across ours, and called us up, and said, "Hey, can you come in and talk about your proposal? Is this simulated data or is it real data?" I'm like, "Oh no, it's real data." And they're like, "Can you come in tomorrow?" I'm like, "Sure, yeah, that sounds great. Maybe get some funding."

And to make a long story short, and I've probably given way too much preamble, basically he came back and said, "I've been in the government for multiple years, and we've been trying to get this data aggregated, and everybody says it doesn't exist and it's not possible. But you all look like you've done it." And he said, "Could you come give some briefings around town for it?" And so it started this kind of dog and pony show, where we went to all the three letter agencies around DC and briefed them on this open data we had aggregated. And at the time Raj and Laurie had come up with some really cool statistical techniques to analyze it, and I was trying to learn to implement the code to work on it. We're kind of crashing into it, trying to put some real academic and scientific rigor around it. But it kind of took off on a life of its own, and these government agencies got a little worked up. And then a Washington Post reporter heard that these grad students and professors were making waves with this open data they had within the Intel community.

So they sent a reporter, Laura Blumenfeld, to spend three days with us, which was kind of shocking, because before that, a grad student's work would've just been esoteric stuff nobody was interested in. And then she spent the three days with us, and we walked her through everything we're doing. And then she disappeared, and we're saying, "Well, maybe there'll be a story at the back of the paper." And there was nothing for months. And so we kind of forgot about it and just kept on cranking away on our work. And we had gotten some grants at this point from the emerging Department of Homeland Security, and they'd run contracts through Oak Ridge National Labs, with these great folks at the labs we were working for that were wonderful. And actually I had taken off to go to London, there's a place in London called the Centre for Advanced Spatial Analysis at University College London, that was doing a lot of the similar kind of mathematical and computational statistical modeling I was really interested in.

And I had a good friend over there who ran the lab. And so we'd basically figure out funding to be able to go in and do work with each other. And so he had figured out some funding, and with the stuff we got from George Mason went over there and we were doing work. And while I was over there, the slow news day finally hit, and the article that Laura had written ended up on the front page of the Post. Since it was a slow news day and there was nothing else really to talk about, all of the TV shows started picking up on it. And so CNN sent a limo over to pick up me and my buddy and take us to the studios in London and do an interview with Wolf Blitzer, which was totally bizarre, and weird, and surreal at the time, especially for my lab buddy, we're both grad students.

Chad Anderson:

You always wonder, you hear all the time stories about how technology goes from academia and permeates out into the real world, and now we know firsthand that it does it in a limo.

Sean Gorman:

Yeah, apparently. And as somebody who'd never ridden in a limo, it was particularly wild. And as a grad student living on a $14,000-a-year stipend, it was extra cool, or just fascinating. So we do the interview, and then I fly back to the US. And by happenstance, one of the professors at the university had been an anchor at CNN previously, and so they kind of do a tag team. They give us some quick media training, and there was also some weird security concerns. So we actually had one of the CNN security people come in and brief us on how to be aware of our surroundings and so forth, which is also super weird and surreal. So we ended up doing this media tour of going to all the usual suspects, by this point it's 2003, that you would expect around something like this.

And so eventually after doing enough of these news shows, and magazine articles, and so forth, the university sent patent lawyers over to see if there was something they could patent, because a lot of people were calling up the university asking if they could buy the technology. And we had some hacky MATLAB scripts, and some really raw C code that we used to run simulations, and a lot of PowerPoint slides to present it all. But we said, "Hey, you can't license this off, it's half-baked." And so basically they gave us the decision of spin a company out or license the technology to a company. And there was a boat club that I rowed at in Georgetown in DC, on the Potomac, and one of the mentors there that I'd always really looked up to was a guy named Don Spero, who won the World Championships in rowing back in the sixties while he was doing a PhD and then spun a company out from it.

And so I was talking to him in the locker room about what we were going through, and he's like, "Hey, I went through something really similar, and I've been working at the University of Maryland." They had this place called the Dingman Center, which was a center for entrepreneurship, and they'd actually set up a VC fund within the university.

And he said, "Hey, I'm in the process of doing this. I've done lots of stuff with spinning technology out of the university. I'd be happy to get breakfast and just talk through it." And so he helped us out more and more, and did a great job of just pro bono helping us out with the university and the IP, along with my professors. And so long story short, we ended up spinning a company out, their venture fund invested along with some other venture funds, eventually In-Q-Tel came in. And we spun a company up and started running with it. And the first thing I said was, "I'm a grad student, I've done some tech jobs before I went back to grad school, but I'm not a CEO, I don't know how to do any of this stuff." So he said, "Okay, we'll get a headhunter and go recruit somebody to be a CEO." So we recruited a CEO, and started the company, and went to go fundraise.

Chad Anderson:

And fast-forward to today, and you're at the helm, you are founding your... How many is this, fourth?

Sean Gorman:

Fourth.

Chad Anderson:

Fourth company, and you're at the helm of it. And you have now done this and repeated this several times, you've built and sold multiple businesses to large companies in this space, and worked at those companies for a couple of years afterwards before going on and doing it again.

Sean Gorman:

Yeah, like we said, we tried to make new mistakes each time, and not make the same mistakes we made the last time. The first one, we made a lot of mistakes, and learned a lot along the way. And I think one of the most beneficial parts of that experience was building a really good team around that. I think we got up to a little over 40, 45 people or so. A lot of the key people that came into that became not only lifelong friends, but also lifelong collaborators as well. And I think any story about these ventures in general is completely incomplete without Pramukta, who was actually our first intern at GeoIQ, which was the startup that spun out of George Mason. And a lot of what I had used in my academic work was from statistical mechanics and physics, within condensed matter kind of work. But I kind of half knew it, half taught myself.

And Raj, the other professor, had a much better understanding of it and taught me what he knew, but neither of us were physicists. And then we came across Pramukta, who was getting his degree in physics, specifically in condensed matter, and stat mech was his strongest and favorite part. And so brought him on as an intern, and he eventually became the CTO of the company. And we've collaborated on every startup since then, he's been the CTO technical founder for all of those things. And it was great, because he actually knew statistical mechanics, which I'd been trying to teach myself for multiple years. So it was awesome to have somebody that was actually super well-versed in it. And just that physics background in general has been a really brilliant theoretical and practical framework for tackling a lot of fun problems across all the startups that we've done.

Chad Anderson:

And that's a really interesting and unique thing about your team also, is that you have been together, you have founded companies and exited those companies, multiple times, together as the same founding team. Which has got to be... to go through, and be in the trenches, and to do that multiple times, and to still be good friends and be working together, is really, really, I think, a testament to the bond that you guys have. Your most recent company prior to Zephr that you sold was bought by Snap, I'm curious in particular how geospatial and GPS are being used at Snap.

Sean Gorman:

Yeah, it was really fascinating how that unfolded, and also in general, how geospatial is used at Snap, because when we went and joined Snap, the team was actually broken into two. And I went over to the maps team, and Pramukta went over to the augmented reality team, and both ended up being very heavy users of geospatial, but functionally in two opposite parts of the company. Both aspects are really illuminating for how big tech and social media in general use geospatial. Going over to the maps team, I think about three or four years before I had joined, maybe even a little bit more than that, Snap had put a hidden feature into the Snapchat app with a map. And they'd used Mapbox's map tiles, and they had taken the location that folks could share, and then they could see where they and their friends are, so it was a live friend map. And that was a pretty unique thing within the social media space at the time, at least at the scale that Snap was doing it, and it ended up being massively popular.

And so they ended up building a whole team around that maps product. And when I came on board, they were in the process of wanting to move off the third party mapping provider, because basically it had gotten big enough, with 350 million users of just the maps, that they wanted to have control of the mapping platform themselves. But in the grand scheme of things, in general across the industry, AR and maps were kind of fusing together. And so I think there were general winds blowing in that direction. But specifically what I ended up doing was helping the folks there build the team out to replace that third party mapping provider. You need your whole mapping stack within that, you need roads and streets, which OpenStreetMap does a fabulous job of providing, but you also need a satellite layer. And that was actually the first thing that I ended up working on, satellite and aerial imagery, and building out our own bespoke version of that.

But you also need land cover, because if you're like me, I didn't have a Snapchat account when we started working with them, so I was late to the game in understanding all things Snap. But the Snap map has a really vivid, bright cartography, and they use land cover data to make that happen. So we needed to build our own land cover data set, and build that out from satellite imagery, so we used some cool Sentinel imagery products to build out that land cover data. And they also had a terrain with hill shading, so there was a variety of things, both cartographic and pragmatic, that needed to go into place to do that replacement. So it was a really fascinating project, because I had never worked in the consumer space with that many users before, or through mobile in that way.

Chad Anderson:

What was it during your time at Snap that made you think there was a gap in the market? That there was something... What did you see there that led you and Pramukta to say, "There is a company here, let's found Zephr"?

Sean Gorman:

Yeah, I'd say the mapping side I just described was context for understanding that market and its mechanics from a production perspective better, but the actual driver for the idea came from Pramukta on the AR side. Whereas Pixel8, the company we had built, didn't really have a whole lot to do with most of the work that I was doing on the map side. It was really built around this concept of crowdsourcing a 3D map at city scale for powering a visual positioning system, and making augmented reality and autonomy viable at an economic trade-off from a unit cost perspective. Basically Apple and Google had both... Google first, with Street View and their own airplanes, has a wonderful 3D map of the world, and they were able to turn that into a feature database to power things like Live View, so you could do live AR walking directions.

And Apple, also having big AR goals, went and made a similar investment. They bought a company from Saab to power their Look Around cars, which is an even fancier version of Street View than what Google had. And so they've been using that same data to enable city scale AR interactions, but these are billions of dollars in assets, and data collection, and compute to make that all happen. So a lot of the other players that don't have those kinds of budgets were looking for more economical ways to go about doing that. Meta, Facebook, bought Mapillary, which was a crowdsourced street view, to help power the work that they want to do, and which was a really awesome team, and they've been doing fascinating stuff. And we were looking at how we could crowdsource using people's mobile devices to make this kind of thing happen, but the issue we ran into, and I think the issue a lot of people ran into when you're doing crowdsourcing off people's mobile phones, was that the GPS just isn't accurate enough.

You had too much drift, especially in urban areas, which are the places that you care about most for enabling AR experiences. The multi-path issues and other urban canyon challenges just made the GPS data attached to these videos and photos too sketchy to be able to turn into viable models. We found if we had a GoPro and we had some telemetry, even basic telemetry on there, we could make it all work. And you can do a bunch of computational tricks and kind of get that phone data to work, but it's janky, and it's hard, and it's expensive. And when we were in the process of working on this problem, Pramukta kind of had the eureka moment of maybe seeing a path forward for it. We were doing some work for one of the big tech companies, and they wanted a ground truth data set to test against, for the visual positioning system that we were testing out for them.

And so we rented an RTK differential GPS, which is a survey grade GPS, the kind you see people on construction sites with, where usually there's a pole or a tripod with a big antenna. But what really makes that work well is that there's a base station nearby, usually within 10, 15 kilometers, that sends an error correction to it. And so basically you have a super precise fixed signal, and then you have this rover with a really nice antenna, and then you can cancel out the error between the measurements of those two devices and come up with a one or two centimeter accurate survey measurement with these techniques. And so we had rented one of these things, and we were jumping through the hoops trying to make it work. And you have to sign up ahead of time for a free base station if you don't want to pay somebody like Trimble thousands of dollars for network access.

But there's all these great NOAA CORS stations that are for free, but you have to register ahead of time, and it takes a week, and then you have to set up [inaudible 00:21:03] messages. There's a whole bunch of hoops to jump through to get these things to work. And we're fumbling around trying to get these things to go, and getting a little frustrated with it. And we're on a corporate campus, where before the pandemic there's hundreds of people around looking at their phones. And Pramukta looks around and says, "Why don't we just make everybody's phone a base station? And then we could get all those measurements and feed them into a model that could converge on reality."

Which was very similar to some of the statistical techniques we were using for the AR computer vision work at the time. And he says, "I think this similar kind of ensemble approach could work." We're in the middle of a project, and we talked about it over dinner, got more excited, and then we're like, "Hey, we got a project to deliver here, let's put this on the shelf and we'll come back to it." And so at the time it was like... Well, when we ended up getting acquired by Snap, we're like, "We bet Snap has this amazing computer vision team, I bet you they've solved it." It ends up that none of these big folks had really solved this particular problem. And so when we left Snap back in the fall of last year, we're kind of thinking about, what are we going to do? We're like, "Oh, the GPS idea, let's pick it up off the shelf and see if we can do something with it."
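
To make the RTK idea above a bit more concrete: a base station at a precisely known, surveyed position can compare the range it should be measuring to each satellite against the range it actually measures, and the difference is a per-satellite correction that a rover within roughly 10 to 15 kilometers can apply, because the two receivers see nearly the same errors. Here is a minimal Python sketch of that idea; the function names, the simplified pseudorange model, and the synthetic numbers are illustrative assumptions, not anyone's actual implementation.

```python
import numpy as np

# Minimal sketch of differential (base station) corrections, assuming we already
# have satellite positions and pseudoranges in a common Cartesian frame. Real
# receivers also handle satellite/receiver clocks, atmosphere, and multipath.

def range_corrections(base_pos, sat_positions, base_pseudoranges):
    """Per-satellite corrections from a base station whose position is surveyed."""
    geometric = np.linalg.norm(sat_positions - base_pos, axis=1)
    return geometric - base_pseudoranges  # what the range should be minus what was measured

def apply_corrections(rover_pseudoranges, corrections):
    """A nearby rover sees nearly the same errors, so adding the base's
    corrections cancels most of them."""
    return rover_pseudoranges + corrections

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sats = rng.uniform(-2.6e7, 2.6e7, size=(6, 3))   # rough satellite positions (m)
    base = np.array([1.11e6, -4.84e6, 3.98e6])        # surveyed base position (m)
    rover = base + np.array([500.0, -300.0, 20.0])    # rover a few hundred meters away
    shared_error = rng.normal(0.0, 5.0, size=6)        # error common to both receivers (m)
    base_pr = np.linalg.norm(sats - base, axis=1) + shared_error
    rover_pr = np.linalg.norm(sats - rover, axis=1) + shared_error
    corr = range_corrections(base, sats, base_pr)
    residual = apply_corrections(rover_pr, corr) - np.linalg.norm(sats - rover, axis=1)
    print(np.round(residual, 6))  # ~0: the shared error cancels out
```

In the networked version Sean describes next, there is no surveyed base station; the role of the known reference is played by an ensemble estimate built across many phones.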

Chad Anderson:

This is an area that at Space Capital we've been focused on for several years. I mean, there's clearly a need in the market to improve the accuracy and the resiliency of GPS, particularly for spatial computing, and particularly in the urban canyons that you were mentioning. And we've looked at dozens of business plans, all of them addressing this problem from different angles. There's constellations of GNSS satellites, the obvious challenge with that approach is that it's very capital intensive, you need half a billion dollars to sort of get to minimum viable product. And then there's the other piece, that I don't think a lot of companies are thinking through, that with GPS today, it is reliable because there is an entire team dedicated in Colorado that is working the post to keep it secure 24 hours a day, seven days a week.

I don't know that a lot of startup companies understand what 99.999999% uptime really means, in terms of just operations and logistics, but also cost. So that's a really, really challenging approach. We've looked at relays, kind of like the solution on construction sites that you were mentioning. People were talking about putting relays on light posts, and lampposts, and things within cities, and then they would help triangulate. But your approach has a number of compelling benefits that we didn't see in the others. And one is that it's software first, and therefore it's inherently scalable. So a very elegant software-only solution. Can you walk us through your approach to this problem and how you see it going from here?

Sean Gorman:

Yeah. I mean, one of the issues that we ran into when we started thinking about this for the computer vision AR use case was, how can we get better hardware to work with to crowdsource these things? And pretty quickly we realized that there's just this massive bottleneck with trying to leverage advanced hardware in a crowdsourcing scenario: the device manufacturers need to adopt the new hardware, or get new firmware deployed onto the devices. There's the process of how you roll out, and how you deploy, and how much time that takes to get out to the masses and actually be implemented. The number of people who don't even upgrade their operating systems and are still running on the operating system that their phone came with is a problem.

And so if you think about trying to get not only people to adopt new technology or new firmware, and update their devices with that new technology. Or the handset manufacturers putting new GPS chips on them, or tethering these things to more advanced devices through the mobile phone. Basically all of them are hard, and all of them are difficult, both from a business, and a technical, and an operational standpoint.

So when we started thinking about the problem, the first thing that we wanted to avoid were all these headaches we saw when we were looking at tackling the problem at Snap, and when we had been Pixel8, where we had eventually gone to sending people GoPros for free, to be community ambassadors to collect data. We were so desperate to get people with higher tech collection mechanisms for us. And so the first thing when we started approaching this problem is like, "Hey, if we could just deliver purely a software, we can avoid all of those massive headaches that we saw, and all of these things that prevent technology diffusion at scale. If we can come up with an approach that provides the optimization and the error corrections to improve GPS purely as software, then that gets rid of a lot of these obstacles and opens up some immediate markets for us."

And then the other side of it that was really compelling for us, selfishly, was that since it's a network-based approach to GNSS, the more devices that we can get on, the more data we have to work with, and the better we can improve our ensembles, and do our optimizations, and do our robust estimations and so forth. And so being able to deploy over software to mobile devices was kind of a natural hand in hand fit, that more data made us more better, to use bad English. And if we could do that over software, we avoided a bunch of these hardware headaches that we had run into previously.
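
As a toy illustration of why more devices help in this networked approach: if many nearby phones each produce a noisy estimate of the same shared error, a robust ensemble statistic converges on that error as devices are added. The median aggregator, the noise model, and the numbers below are assumptions for illustration only, not Zephr's actual model.

```python
import numpy as np

# Toy "network as the base station" sketch: each phone contributes a noisy
# per-satellite correction estimate, and a robust statistic over the ensemble
# (a plain median here) gets closer to the shared error as more devices join.

def ensemble_correction(per_device_estimates):
    """per_device_estimates: array of shape (n_devices, n_satellites), in meters."""
    return np.median(per_device_estimates, axis=0)

rng = np.random.default_rng(1)
true_shared_error = np.array([3.2, -1.7, 2.4, 0.9])   # meters, common to nearby devices
for n_devices in (1, 3, 10):
    estimates = true_shared_error + rng.normal(0.0, 1.5, size=(n_devices, 4))
    estimate = ensemble_correction(estimates)
    print(n_devices, "devices -> mean error:",
          round(float(np.abs(estimate - true_shared_error).mean()), 2), "m")
```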

Chad Anderson:

How significant is this improvement? I mean, how much more accurate can you get from using this networked approach, and what does this mean for people in their everyday lives? What benefits, what use cases do you see for ultra precise positioning?

Sean Gorman:

Yeah. I mean, we've had a great benefit in the process of developing this technology, in that early on, January of this year, we teamed up with SRI, the Stanford Research Institute. And they have this amazing PNT group, which is positioning, navigation and timing, which covers GPS and other kinds of location-based methods. And they have a really well-regarded simulation and benchmarking tool set that they use, and that's used across a whole bunch of other folks. The technology from them has spun out into companies like ZONA and others. And so great reputation, well regarded, so we went in early to them. And SRI ended up investing, and as part of that, we were able to access, one, all of their PhDs, and two, all this great benchmarking and simulation capacity. And so to your accuracy question, that was the first thing we wanted to do, was to go through and run through every possible scenario we could think of to understand how the model works, where it falls apart, and how we make it more robust.

And so we've been doing that since January, and continue to iterate and improve, and throw harder and harder tests at the technology. And within those tests, our target was to be below 50 centimeters, and we found that with just three devices on the network, and those devices need to be roughly within 10 kilometers of each other, we could be below 50 centimeters. If we had 10, 11, 12 devices within that 10 kilometer radius, we could be below 30 centimeters within those benchmarks that we're running with SRI. This is really important for kind of our unit cost and economics right now: how many devices did we need to be able to sample to get this error correction to broadcast out to everybody? If it required a hundred devices or a thousand devices, then that was going to be more computationally challenging, because the ensemble would be a lot bigger to get to that solution.

We'd need more compute infrastructure, and it was going to need to be more sophisticated. But the great thing with SRI is we're able to not only do better than what our target was, but be able to do that with a minimal number of phones, which really made us feel a lot better about the scalability, and also the amount of capital that we would need to be able to go to market with this. So that goes to the first question. And we're in the process of doing that field-testing with live devices now, and it's trending back towards those exact same targets, being sub-50 centimeters. But when we launch this, we'll have great quantified metrics and results for those in the field with live mobile devices, as well as all the simulation stuff that we did with SRI.

Chad Anderson:

That's great. And so is it true, because it's network based, the more nodes you have online, the better and more accurate it can be?

Sean Gorman:

Yes, that's definitely the case. Although at a certain point there are diminishing returns, in that you're not improving the accuracy that much, but you're increasing your compute overhead by the number of signals you're jamming into the ensemble. And we generally found that after 15 devices, the additional accuracy you gain is not worth the extra compute against it. That's nice, in that we can keep the compute low, and then still hit our accuracy and resiliency targets with the models.
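
Putting the two numbers from this exchange together, roughly a 10 kilometer radius and diminishing returns past about 15 devices, a device-selection step might look something like the sketch below. The helper names, the sort by distance, and the hard-coded constants are assumptions for illustration; they are not Zephr's actual policy.

```python
import math

EARTH_RADIUS_M = 6_371_000
MAX_RADIUS_M = 10_000   # devices roughly within 10 km of each other
MAX_ENSEMBLE = 15       # past ~15 devices, extra accuracy isn't worth the compute

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def select_ensemble(target, candidates):
    """Keep the closest candidate devices within MAX_RADIUS_M, at most MAX_ENSEMBLE.

    target: (lat, lon) of the device being corrected;
    candidates: list of (lat, lon) for other currently active devices.
    """
    scored = [(haversine_m(target[0], target[1], lat, lon), (lat, lon))
              for lat, lon in candidates]
    nearby = [d for d in scored if d[0] <= MAX_RADIUS_M]
    nearby.sort(key=lambda d: d[0])
    return [device for _, device in nearby[:MAX_ENSEMBLE]]
```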

Chad Anderson:

And then so where do you see this being used? Who do you think is going to be most excited about adopting this technology?

Sean Gorman:

Yeah. I mean, we've had some great conversations. It was funny when we were first discussing this and we had left Snap, we had done a lot of business planning around the AR use case that we knew really well. And I think like most people, when you leave a startup, you always have ideas of how it could have gone better. And I think I personally was fixated on, how could we have nailed this better and really done it? And then Pramukta reminded me, as we were going through that, he's like, "Hey, we sold early because we didn't know when the augmented reality market was going to become mainstream and really viable. And if anything, we know less now than we did when we sold the company." It ended up being a good bet, because we just didn't know when AR was going to be more than the Google, Apple, Facebook, Niantic, Snaps of the world.

And so in hindsight, that ended up being a good decision, because I think people are even less sure now of when that's going to happen. I think people are still confident that there's a big market there, but when we're going to overcome all of the technical problems with getting a viable headset is really a big challenge, much like autonomous vehicles. But that being said, Pramukta was like, "Hey, GPS just in and of itself has lots of users, we should explore that." And I was like, "You're absolutely correct." And so I started calling up friends at rideshare companies, and other big tech and social data companies, insurance folks in the automotive space that were looking at collision avoidance, and just started having as many conversations as I could with people that had businesses built around GPS, GNSS. And where did it cause problems? Where did they think there were opportunities to save money and improve it?

And the cool thing from a business perspective was basically everybody came back and said, "Yeah, GPS is kind of awful, but we've learned to live with it, because it's at the core of our business model." Some of the rideshare and food delivery folks shared anecdotes, where they said they've done analysis, where if they can save one second on average for the pickup of riders or the delivery of food, they make another 1.5 million dollars a year. And it's just one second because there's so many drivers, and so many rides and deliveries happening, that even the smallest improvement logistically makes a big difference from a revenue perspective. And so we really dug into that one. On the insurance side, they were sharing, you can download one of these apps and it monitors your driving through your phone, or sometimes with a device that's put onto the vehicle. A friend from insurance was sharing that they just get all this vitriol, because when you drop lock on a GPS, it looks the exact same as somebody braking hard on the road.

And some people would be getting dinged for driving down I-75, and they'd lose GPS, and then their insurance would get dinged because it was recorded as a hard braking event. And other folks, like SRI, have done a lot of really groundbreaking work around collision avoidance, and a lot of folks have wanted to do that within vehicles in the automotive space. And there's a lot of interest in whether we could also tie in collision avoidance to pedestrians and bicycles, and things along those lines. But GPS just isn't accurate enough to do that. The other one that's really interesting is the gaming use case. There's amazing mobile based games like Niantic's Pokemon Go that have generated billions of dollars, but there's also a challenge because all these games are dependent on geofencing, and that is kind of the game mechanic: you put a geofence around an area, and when your GPS goes into that geofence, it triggers a game mechanic, the Pokemon shows up, you do whatever you're going to do with the Pokemon, and then the interaction ends.

But basically there's only so many ways you can spin that game mechanic with different IP and different experiences. So there's also a lot of hunger within the gaming industry for new mechanics. And one of the really cool things about this network based approach with GPS is that the technique that Pramukta and the team, Kostas and Scotty, have put together does a relative distance metric between all the devices. And that relative distance metric helps improve the accuracy as it's fixed into real world coordinates. But the other cool thing it does, is within that SRI work, we're below 20 centimeters, oftentimes 10 centimeter accuracy, for relative position between devices. So when one device gets close to another device, we have really good metrics for that. And so collision avoidance is one of the cool... Or not cool, but very socially useful aspects of that.

But another more fun one is doing a game of tag. You have entirely new game mechanics you could do, because with that very close relative distance, as your phone gets close to my phone, when they intersect each other, that becomes a pretty good metric for saying, "Hey, we've had a collision, we've tagged each other." And that opens up a huge new world of game mechanics. And I think just in general having that level of position, especially as you start adding things like pose estimation, and tapping into the IMU and sensor fusion. We definitely see a world where this could replace visual positioning systems, which are incredibly expensive and a huge impediment to AR, and become a new approach that could potentially open up that market that's been so challenging.
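
The two game mechanics contrasted here, the classic geofence trigger and a proximity "tag" built on precise relative distance, reduce to very small checks once the positioning is trustworthy. This is a toy sketch in local planar coordinates; the threshold values and function names are assumptions chosen to reflect the 10 to 20 centimeter relative accuracy figures mentioned above.

```python
import math

def in_geofence(pos, center, radius_m):
    """Classic mechanic: fire when a device's position enters a circular fence."""
    return math.dist(pos, center) <= radius_m

def tagged(pos_a, pos_b, threshold_m=0.5):
    """Proximity mechanic: with ~10-20 cm relative accuracy between devices,
    two phones passing within half a meter can reliably register as a tag."""
    return math.dist(pos_a, pos_b) <= threshold_m

# Positions as (east, north) meters in a small local frame, for simplicity.
print(in_geofence((3.0, 4.0), (0.0, 0.0), radius_m=10.0))  # True: 5 m from center
print(tagged((1.20, 0.00), (1.55, 0.10)))                   # True: ~0.36 m apart
```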

Chad Anderson:

Wow. And I mean, I also really appreciate that you guys are very early in this journey, and these are just the initial conversations that you're having, and initial use cases that you're thinking up. I mean, the market here is massive, and I want to touch on that a little bit. But before we do that, look, I mean, positioning is undeniably valuable to all of us, as demonstrated by the fact that we carry it around and use it continuously throughout the day. We are leveraging this technology, it makes our lives better. So I guess in exchange for this, are people then... They need to share their location data with Zephr for these benefits?

Sean Gorman:

Actually no. So I think one of the things we thought about hardest as we got into this was privacy. It was from multiple perspectives. I mean, one, we saw how important privacy was when we were at Snap, and how conscientious they are with that and their user community, and all the pains they went through with live location, for instance, to protect privacy. Both internally from an engineering perspective, but also from an external messaging perspective, and helping the public understand what they're doing and how they're protecting privacy across that. And then just our own personal experience with being in the geospatial community for such a long time, and seeing stories where privacy has been abused, and really thinking about how we could engineer the product from the ground up to be privacy forward, and really have that as one of the core values and technical differentiators in what we're doing.

And so the easiest way to do this would be to send everybody's location up to a server with the measurements, and then calculate that improvement, store that improvement, broadcast their location out, and just aggregate a massive database of everybody's location. There's a ton of value in that. But we also felt like that was the wrong thing to do from an ethical standpoint. Anytime you aggregate that much data, even if you try to put anonymity around it, there's just too many ways to unwind that anonymity, like with New York City's taxi cab database, which was all anonymous, and pretty quickly people were unwinding all sorts of personal information within it. And so we ended up taking a different approach, in that really, for doing our error correction on the satellite measurements, the personal information and personal location information about the users isn't relevant.

And so [inaudible 00:37:35] things like the pseudorange, the ADR, the carrier phase data, these measurements off the satellites that feed into the location equation. And so the way we've set it up is having those measurements pushed up to the server, and those measurements associated with location are held in memory while that computation is done within the ensemble. And then instead of solving for the person's location, we actually solve for the error correction that needs to be made to that initial GPS data. And then we broadcast those error corrections down to the person's device, and that error correction is used locally to determine their location, without us keeping any of the location metadata that went into creating that ensemble.

Once that calculation's done, it's dumped from memory and the new data comes in. We never save it to disc, never store anybody's location. And that from our perspective, I think generally from engineering practice, keeps things very clean. From the standpoint of we're just really in the business of improving the accuracy of the GPS data that your phone is generating. And then it's up to the app developer and the user for their relationship on how that data is shared. But from our standpoint, we very much don't want to be in the business of aggregating and brokering data, and having those kind of privacy concerns. It's really just about new mechanics and new ways to improve GPS, so that people have better app experiences and can solve new interesting problems through their mobile devices and wearables.
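
As a rough sketch of the in-memory flow described above: raw GNSS observables come up from devices, a per-satellite correction is computed for the ensemble rather than for any individual's location, the corrections go back down, and nothing is persisted. The class, the `residual_m` field (a device's measured-minus-modeled range for a satellite), and the simple averaging step are assumptions for illustration; Zephr's actual estimator and message formats are not described in the episode.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    device_id: str
    sat_id: int
    residual_m: float  # measured-minus-modeled range for this satellite (assumed field)

def compute_corrections(batch):
    """Placeholder ensemble step: average each satellite's reported residual.
    A production system would use a robust estimator over raw observables
    (pseudorange, carrier phase, ADR) rather than a plain mean."""
    by_sat = {}
    for m in batch:
        by_sat.setdefault(m.sat_id, []).append(m.residual_m)
    return {sat: sum(vals) / len(vals) for sat, vals in by_sat.items()}

def handle_epoch(batch):
    """Process one epoch entirely in memory: corrections are returned to every
    contributing device, and the raw batch is never written to disk; it simply
    goes out of scope once this returns."""
    corrections = compute_corrections(batch)
    return {m.device_id: corrections for m in batch}

# Example epoch from three devices observing two satellites.
epoch = [
    Measurement("phone-a", 7, 3.1), Measurement("phone-a", 12, -1.4),
    Measurement("phone-b", 7, 2.8), Measurement("phone-b", 12, -1.9),
    Measurement("phone-c", 7, 3.4),
]
print(handle_epoch(epoch))
```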

Chad Anderson:

Yeah, sounds great, thanks. So I want to get back to the use cases and how that turns into actually a market for what you do. So last year, the Geospatial World Forum estimated the size of the global geospatial market to be $450 billion. And it's projected to grow at a CAGR of 15%. So by 2028, the market is expected to reach a trillion dollars.

The geospatial economy is underpinned by high precision data collection, and GNSS and positioning remains the largest and fastest growing segment within that geospatial market, making up about 55% of the total market share. So this is massive. I mean, we know that the US Commerce Department came out with a report a few years ago, talking about how GPS has generated a trillion and a half dollars of economic value in the US. We've written the GPS Playbook, which dives into that and the venture returns that are on the heels of this. So this is clearly a massive market, clearly from consumer, from enterprise, from government, everyone is using this, and it is improving business, and it's improving our lives. And so I guess I'm curious to hear, when you think about numbers that are sort of that large, how do you think about the market for what you do? How much of that can you capture? What segments are within range or out of range? Just how are you thinking about the market overall?

Sean Gorman:

Yeah, it's a great question, and the size of the market is very exciting. And I think one thing I love about this technology is that it impacts the vast majority of people. I mean, the number of mobile phones on this planet that are GPS, GNSS enabled is just humongous. And so the ability to build something that has the potential, at least, to touch a lot of folks, and improve lives, and create new opportunities is super exciting. But I think within that, we really had to focus on what area we thought that we could be most successful in, and what was most well-suited to the technology that Pramukta and the team had built out. And I think across that, location-based services within mobile devices was a really natural fit, since we had this network effect that the more people that are on the network, the better the overall location and the better solutions that we could create, and the more geography that we could cover.

Mobile phones fit that well, there are billions of them across the planet, there's 3 billion Android devices alone, and this becomes a great substrate to launch a product into. And it's also where we heard people having the most challenges. The classic, I think everybody's had the problem where the Uber or Lyft driver shows up on the wrong side of the street or the wrong block, and then they call you, and you do this dance of trying to find each other in a setting which sometimes can have safety concerns, and it can just be annoying. And so it was one of these things where I think there was a natural fit in the economics of our approach and the way the technology was built out, based on the fact that it was networked. And so that's really become our focus, starting off with these location-based services within mobile applications, and specifically folks with a large number of users that will benefit from an improvement in accuracy.

And I think to your point earlier, we've gone to the obvious places, but much of what excites me is, what will people do with this new capability that we've never thought of? What are the concepts that emerge when you have really accurate relative distance, and you have really precise absolute location? And potentially pose prediction is a very tractable problem to add on top of that. There's just a whole lot of cool things that you can do at scale with a lot of users, which is very exciting. So that kind of B2B2C is where we've been very squarely focused. But longer term, I think there's nothing that makes this approach dependent on mobile devices. It'll work for any kind of GNSS receiver, with the same kind of benefit of networking receivers together, which opens up other really interesting enterprise markets down the road. But I think with any good technology, you need to find your initial target and really focus in and execute on that well, and then look for growth opportunities.

Chad Anderson:

Sure. So for any of our listeners who are interested in seeing how Zephr can help, or integrating your network GPS solution into their apps, where can they go to learn more?

Sean Gorman:

Yeah, definitely check out our website, Zephr.xyz, Z-E-P-H-R. And there's a great facility there to get in contact directly with us, and we'd love to have feedback, love to discuss use cases. We've had a lot of really interesting ones recently where folks have asked about applying this to aviation and drones. Just wonderful experiences to learn and understand those use cases, and see how we might help. And then also how we can best direct the technology to fit the market.

Chad Anderson:

Certainly very exciting, massive market, really cool technology, and we're really happy to be involved. Sean, thanks for taking time out of your busy day to join us. Thanks very much for coming on the show.

Sean Gorman:

Oh, thank you. And thanks for the support of the company, it's been amazing.

Chad Anderson:

Thanks for tuning into the Space Capital Podcast. If you enjoyed this episode, please leave us a review and subscribe to make sure you never miss an episode. And if you're interested in learning more about investing in the space economy, I invite you to visit our website spacecapital.com, where you can get access to more industry leading insights and learn how you can join the entrepreneurial space age.

Order The Space Economy: Capitalize on the Greatest Business Opportunity of our Lifetime.