PUBLISHED
February 2, 2024
BY
Space Capital

Understanding Spatial Computing

What is Spatial Computing?

With the launch of Vision Pro on February 2nd, Apple has boldly stepped forward, creating a new product category and positioning Spatial Computing at the heart of its next-generation ecosystem. This move has the potential to usher in a new era of digital experiences, promising a level of convenience, personalization, and efficiency that exceeds our current interactions with traditional screen-based devices. However, the true magic of the technology isn't just in the headset.

Figure 1. Apple Vision Pro users can creatively tailor the system to best fit their workstyle in a 3D environment

Let's first delve into two terms that have become popular in the tech world: Augmented Reality (AR) and Virtual Reality (VR). VR immerses users entirely in a digital, three-dimensional environment, allowing them to interact as if they were physically present in that environment. AR, on the other hand, enhances our real-world surroundings by overlaying digital content onto the physical environment, enriching our interactions with the physical world. Positioned on a continuum between AR and VR, Mixed Reality (MR) emerged as a technology that blends the physical and virtual worlds to create one-of-a-kind environments where physical and digital elements coexist and interact seamlessly in real time. What sets MR apart is its deep connection to the physical world, notably its integration of geography and a firm grasp of physical context. MR has been widely used as a comprehensive term to encapsulate a wide range of experiences in the Spatial Computing realm. Since Apple has a reputation and history of defining industry terms and standards, such as popularizing the term 'App' in place of software application, the tech giant's influence and leadership in consumer technology could give it the leverage to crown Spatial Computing as the official industry term for AR/VR technologies.

The term "Spatial Computing," however, is not a new one. It is often attributed to a 2003 thesis by Simon Greenwold, who defined it as follows: "Spatial computing is the interaction between humans and machines, where the machine retains and manipulates references to real objects and spaces. It serves as a crucial element in making our machines more integral partners in our daily activities, both work and leisure." The paper is intriguing because it reimagined Spatial Computing in a manner more aligned with ubiquitous computing than with conventional virtual and augmented reality concepts.

This novel perspective is closely linked to the even earlier notion of the "invisible computer," championed by Donald Norman in the 1990s. Norman's concept doesn't aim to make the physical device disappear; rather, it strives to integrate computer interactions so seamlessly into daily routines that users are no longer conscious of using a computer. This is important because humans have an innate ability to perceive and interact with the physical space around us, and it's only natural that computers should adapt to interact with us in our environment, rather than forcing users to conform to their limitations. In this sense, technology becomes "invisible" by fading into the backdrop of our consciousness.

Figure 2. Ubiquitous Computing amalgamates real-time data from a wide variety of digital devices to produce context-aware insights

Taking all of that into account, we look at Spatial Computing as an omnipresent, ubiquitous computing approach that harmoniously integrates the digital layer of information with the physical world, becoming an integral yet subtle part of the environment. This enables users to flexibly interact with digital information in their physical environment through multimodal interactions that best fit their preferences and requirements. Beyond the visual experiences brought by VR and AR, haptic rumbles, chimes, and audio cues can all complement virtual experiences, making them more tangible and accessible. It is meant to be unobtrusive and should not require you to frequently initiate requests or navigate multiple apps on a smartphone.

The objective of Spatial Computing is to blur, and ultimately eliminate, the boundaries between the physical and digital worlds, fostering an immersive, intuitive, and human-centric digital experience that aligns with our natural senses and instincts. This technology holds enormous potential to revolutionize our daily lives by reducing friction in small yet intricate everyday tasks, delivering real-time information and interactions precisely when and where they are needed. Simply put, being spatially aware makes technology feel like magic and bestows upon us superhuman-like capabilities.

Figure 3. Spatial Computing could revolutionize our interaction with the digital world at a fundamental level

At Space Capital, we have been following the development of Spatial Computing closely for years. We have already made several investments in this emerging field, including 3D asset management platform echo3D, 3D spatial analytics platform Cognitive3D, and networked GPS solution Zephr. These investments were driven not merely by our excitement for the technology's transformative potential, but by a clear recognition of the critical role space technology plays in this digital revolution. We foresee that as Spatial Computing develops, its applications will broaden to various indoor and outdoor environments, from crowded urban centers to isolated industrial sites, where ensuring ubiquitous, reliable, and enriched digital interactions becomes essential. From GPS enhancing positioning and navigation, to satellite connectivity enabling uninterrupted digital processing on a global scale, to the rich contextual data satellites help capture, space technology is forming an indispensable thread in the Spatial Computing tech stack.

The Symbiotic Relationship of Spatial Computing with Space Technology

Spatial Computing represents not a new form of computer processing technology but rather a sophisticated integration and application of existing technologies, tailored to provide contextually rich and responsive data. This approach enables more proactive assistance in our actions and offers diverse interaction modalities, ideally suited to individual user needs. Such a dynamic computing environment underscores why space technology is fundamental in driving Spatial Computing forward.

Figure 4. Spatial data and processing are underpinned by space technology through location and context

As a technology designed to be mobile, Spatial Computing must transcend confined spaces to maintain a constant understanding that adapts to our movements and environments. The root of this adaptability is precise location awareness: knowing not just where we are, but also our orientation and intended destination. Global Navigation Satellite Systems (GNSS) like GPS, GLONASS, Galileo, and BeiDou are crucial for any location-based interactions, enabling everything from context-aware services and immersive experiences to enhanced navigation capabilities. They effectively anchor us and our digital interactions to the physical world. This also necessitates augmenting current GPS precision to ensure reliability, especially in areas with weaker signals. For instance, integrating technologies like computer vision and advanced inertial sensors with GPS data helps maintain location accuracy in urban canyons or dense forests.
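To make the idea of fusing GNSS fixes with inertial data concrete, here is a minimal sketch of a one-dimensional complementary filter in Swift. The class name, weighting factor, and sample values are illustrative assumptions; production systems typically use full Kalman filters over a 3D state and fold in visual odometry as well.

```swift
import Foundation

/// A minimal 1D complementary filter that blends dead-reckoned motion
/// (from inertial sensors) with periodic GNSS fixes. Illustrative only:
/// production systems use full Kalman filters over a 3D state.
final class PositionFuser {
    private(set) var position: Double   // meters along one axis
    private var velocity: Double = 0    // meters per second
    private let gnssWeight: Double      // trust placed in each GNSS fix (0...1)

    init(initialPosition: Double, gnssWeight: Double = 0.15) {
        self.position = initialPosition
        self.gnssWeight = gnssWeight
    }

    /// Dead-reckoning step: integrate acceleration between GNSS fixes.
    func predict(acceleration: Double, dt: Double) {
        velocity += acceleration * dt
        position += velocity * dt
    }

    /// Correction step: nudge the estimate toward a (possibly noisy) GNSS fix.
    func update(gnssPosition: Double) {
        position += gnssWeight * (gnssPosition - position)
    }
}

// Hypothetical usage: inertial updates at 100 Hz, GNSS fixes at 1 Hz.
let fuser = PositionFuser(initialPosition: 0)
for step in 0..<100 {
    fuser.predict(acceleration: 0.2, dt: 0.01)                // fake IMU sample
    if step % 100 == 99 { fuser.update(gnssPosition: 1.0) }   // fake GNSS fix
}
print("fused position estimate:", fuser.position)
```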

Next, Spatial Computing needs to comprehend not just our location but also our intentions and the relevant elements in our surroundings. This requirement places a high emphasis on detailed, up-to-date mapping and the integration of hyper-localized data, much of which is captured through satellite and remote sensing technologies. Earth observation satellites employ a wide range of sensors to monitor global environments and provide a massive amount of real-time geospatial intelligence (GEOINT) data, such as high-resolution imagery, weather and climate observations, land cover data points including topography and vegetation, as well as human activity and urban development patterns. All are essential for a broad spectrum of Spatial Computing applications. Simply put, satellites enable our devices to perceive our world and its myriad details much as we do, but with capabilities that extend far beyond our own vision and knowledge. GEOINT data forms the backbone of an intelligent system that can anticipate needs with enhanced situational awareness, inform actions, and provide critical information to aid decision making, all in real time.

Figure 5. LEO SatCom development is paving the way for unparalleled global access

Since the aim of Spatial Computing is to deliver information at a speed that feels instantaneous and natural, enhancing the user experience to the point where the technology becomes an intuitive extension of oneself, it should demand minimal input from the user and avoid any hiccups in performance. At the core of an uninterrupted Spatial Computing experience lies the need for consistent, high-speed connectivity. As real-time data processing becomes increasingly crucial, any delay or interruption could significantly disrupt the immersive experience. Here, new Low Earth Orbit (LEO) satellite communications (SatCom) constellations play a pivotal role. Thanks to their closer proximity to Earth compared to traditional satellites, LEO systems like SpaceX's Starlink, Amazon's Kuiper, and Eutelsat OneWeb offer continuous data flow with reduced latency and faster data transmission. SatCom is also known for its resilience and reliability, even in challenging conditions such as remote areas where terrestrial internet infrastructure is absent or during extreme weather events.
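The latency advantage comes down to simple geometry: radio signals travel at roughly the speed of light, so a satellite at around 550 km adds far less propagation delay than one in geostationary orbit at roughly 35,786 km. The back-of-the-envelope Swift calculation below illustrates the gap; the altitudes are nominal figures rather than the parameters of any particular constellation, and real-world latency also includes processing and ground-segment hops.

```swift
import Foundation

/// Best-case propagation delay for a satellite link directly overhead,
/// ignoring processing, queuing, and ground-segment hops.
func roundTripDelayMilliseconds(altitudeKm: Double) -> Double {
    let speedOfLightKmPerS = 299_792.458
    // Up to the satellite and back down: 2 * altitude.
    return 2 * altitudeKm / speedOfLightKmPerS * 1_000
}

let leoDelay = roundTripDelayMilliseconds(altitudeKm: 550)      // ~3.7 ms
let geoDelay = roundTripDelayMilliseconds(altitudeKm: 35_786)   // ~239 ms
print(String(format: "LEO ≈ %.1f ms, GEO ≈ %.1f ms", leoDelay, geoDelay))
```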

Apple’s Spatial Computing Transformation is a Decade Long Journey

A prime example that illustrates the interplay between Spatial Computing and space technology is Apple. It's a little-known fact that Spatial Computing has been a cornerstone of Apple's strategic vision for many years. The company has made a series of calculated, progressive steps that began with the iPhone, and this key aspect of its long-term planning is only starting to come into the limelight with the Vision Pro headset. Tracing Apple's trajectory back through time reveals how space technology has been instrumental in catalyzing its innovation.

GPS Paving the Road for Revolutionary Positioning Technologies

The iPhone, at its inception, was the genesis of a new spatially aware ecosystem. The introduction of GPS functionality in the iPhone 3G in 2008 marked a significant milestone. GPS's ability to pinpoint our location on Earth not only enhanced navigation and mapping experiences but also laid the very foundation for highly sophisticated Spatial Computing capabilities. The introduction of the barometer sensor in the iPhone 6 in 2014 was the next key improvement. Barometers detect changes in air pressure, enabling altitude measurements and more accurate GPS positioning.
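For developers, that barometer is exposed through Core Motion's CMAltimeter, which reports pressure and relative altitude changes. The short sketch below shows the standard usage pattern, with availability checks and error handling kept minimal for illustration.

```swift
import CoreMotion

// Minimal sketch: read relative altitude changes from the iPhone barometer.
let altimeter = CMAltimeter()

if CMAltimeter.isRelativeAltitudeAvailable() {
    altimeter.startRelativeAltitudeUpdates(to: .main) { data, error in
        guard let data = data, error == nil else { return }
        // relativeAltitude is meters since updates began; pressure is in kPa.
        print("Δaltitude: \(data.relativeAltitude) m, pressure: \(data.pressure) kPa")
    }
}
```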

ARKit, Apple's groundbreaking augmented reality framework introduced in 2017, leveraged the full potential of Apple’s hardware advancements and has played a pivotal role in expanding the horizons of Spatial Computing development. Utilizing Visual Inertial Odometry (VIO), ARKit precisely tracks device movement and orientation by combining camera and inertial sensor data. This fusion enhances spatial positioning and object placement accuracy, ensuring that virtual objects are precisely anchored within the user's real-world context. It also supports shared experiences, allowing multiple users to see the same virtual objects from their own perspectives in a shared space. ARKit not only complements GPS technology but also significantly augments it, allowing machines to understand and users to interact with their surroundings in entirely new ways.
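In practice, anchoring digital content with ARKit takes only a few lines: a world-tracking session runs VIO under the hood, and an ARAnchor pins a coordinate frame to a point in the physical scene. The sketch below uses standard ARKit calls; the chosen offset (one meter in front of the camera) and the surrounding structure are illustrative rather than a complete app.

```swift
import ARKit

// Minimal sketch: start a world-tracking (VIO) session and pin an anchor
// roughly one meter in front of the current camera pose.
let session = ARSession()
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
session.run(configuration)

func placeAnchorInFrontOfCamera() {
    guard let frame = session.currentFrame else { return }
    // Offset the camera transform by -1 m along its local z-axis (forward).
    var offset = matrix_identity_float4x4
    offset.columns.3.z = -1.0
    let transform = simd_mul(frame.camera.transform, offset)
    session.add(anchor: ARAnchor(transform: transform))
}
```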

Figure 6. VIO is particularly useful in GPS-denied environments to determine the position and orientation of objects

Subsequently, in 2019, the integration of the U1 chip into iPhones, Apple Watches, and HomePods brought a new dimension of spatial interaction. The U1 chip utilizes ultra-wideband (UWB); think of it as a super-accurate indoor GPS. What makes UWB special is its ability to operate with very little interference, making it more reliable and efficient than common technologies like WiFi and Bluetooth. It achieves this by sending out rapid pulses to precisely determine the position and direction of other UWB-equipped devices, while also facilitating high-speed data transfer and communications.

This new capability enabled some exciting features. For example, using the U1 chip, an iPhone and HomePod can locate each other, enabling seamless music transfer through Handoff. Beyond this, the U1 chip could allow users to control other non-U1 devices using ARKit. The possibilities are vast, ranging from controlling lights and appliances, to unlocking car doors on approach, to accessing restaurant menus via spatial shortcuts.
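Developers can tap the same U1 ranging capability through Apple's Nearby Interaction framework: a session is configured with a peer's discovery token and then streams distance and direction updates. The sketch below shows the core pattern under the assumption that the peer's token has already been exchanged out of band (for example, over MultipeerConnectivity); that plumbing is omitted.

```swift
import NearbyInteraction

// Minimal sketch of U1/UWB ranging via Nearby Interaction.
// Assumes the peer's NIDiscoveryToken has already been exchanged
// out of band (e.g. over MultipeerConnectivity); that plumbing is omitted.
final class UWBRanger: NSObject, NISessionDelegate {
    private let session = NISession()

    func startRanging(with peerToken: NIDiscoveryToken) {
        session.delegate = self
        session.run(NINearbyPeerConfiguration(peerToken: peerToken))
    }

    // Streams distance (meters) and direction (unit vector) to the peer device.
    func session(_ session: NISession, didUpdate nearbyObjects: [NINearbyObject]) {
        guard let peer = nearbyObjects.first else { return }
        if let distance = peer.distance, let direction = peer.direction {
            print("peer is \(distance) m away, direction: \(direction)")
        }
    }
}
```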

Figure 7. The U1 chip has redefined device interaction with an accurate and responsive sense of direction and distance

In 2020, the advent of LiDAR technology in Apple devices marked another significant leap in spatial perception. LiDAR acts like a bat's echolocation but with light, creating detailed 3D maps of the environment. This technology enabled devices to understand space and dimensions with an unprecedented level of accuracy. Such depth sensing capabilities not only enhanced augmented reality interactions but also bolstered everyday applications like photography and navigation. It represented a shift from flat, two-dimensional interactions to a more dynamic, three-dimensional engagement with the digital environment.
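On LiDAR-equipped devices, ARKit exposes this depth sensing as a live 3D mesh of the surroundings. The minimal sketch below enables scene reconstruction and inspects the resulting mesh anchors; it is an illustration of the API pattern, not a complete scanning or rendering pipeline.

```swift
import ARKit

// Minimal sketch: use the LiDAR scanner to build a live 3D mesh of the room.
final class MeshScanner: NSObject, ARSessionDelegate {
    private let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        // Scene reconstruction is only supported on LiDAR-equipped devices.
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            configuration.sceneReconstruction = .mesh
        }
        session.delegate = self
        session.run(configuration)
    }

    // Each ARMeshAnchor carries a chunk of reconstructed environment geometry.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for mesh in anchors.compactMap({ $0 as? ARMeshAnchor }) {
            print("mesh chunk: \(mesh.geometry.vertices.count) vertices, \(mesh.geometry.faces.count) faces")
        }
    }
}
```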

Figure 8. LiDAR’s enhanced depth perception has been critical for shifting user experience in photography, navigation, and AR, from flat 2D to dynamic 3D digital interactions

Apple's pursuit to elevate its devices' spatial capabilities didn't slow down there. The development of the Find My network in 2021, powered by GPS and UWB, allows users to locate Apple devices down to the centimeter level. The introduction of AirTags took this to the next level by enabling real-world object tracking on a global scale. AirTags are small spatial beacons that emit a Bluetooth signal that anonymously connects to any nearby active device within Apple's Find My network. The AirTag's location is then triangulated based on the strength of the Bluetooth signal received by those third-party devices. In 2022, as GPS technology evolved, Apple integrated additional innovations like precision dual-frequency GPS and advanced inertial motion sensors into select iPhone and Apple Watch models. Dual-frequency GPS allowed for better accuracy in urban areas, where traditional GPS signals can sometimes be obstructed or less reliable. Inertial motion sensors complemented GPS data by continuously monitoring device position and orientation, even when GPS signals were lost.
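The Bluetooth side of this process relies on signal strength rather than UWB's time-of-flight ranging. A received signal strength indicator (RSSI) can be mapped to an approximate distance with a log-distance path-loss model, and estimates from several observing devices can then be combined into a position. The Swift sketch below shows only that distance step; the reference power and path-loss exponent are illustrative assumptions, since Apple's actual Find My pipeline is not public.

```swift
import Foundation

/// Rough distance estimate from a Bluetooth RSSI reading using a
/// log-distance path-loss model. The constants are illustrative:
/// measuredPowerAt1m is the expected RSSI at 1 m, and pathLossExponent
/// describes how quickly the signal decays (2 = free space).
func estimatedDistanceMeters(rssi: Double,
                             measuredPowerAt1m: Double = -59,
                             pathLossExponent: Double = 2.0) -> Double {
    pow(10, (measuredPowerAt1m - rssi) / (10 * pathLossExponent))
}

// Example: a reading of -75 dBm suggests the beacon is roughly 6 m away.
print(estimatedDistanceMeters(rssi: -75))
```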

This continuous evolution and refinement of positioning technologies has been pivotal to Apple's Spatial Computing strategy. Through it, Apple has not only improved the precision of spatial data but also revolutionized personalized user-machine interactions within three-dimensional space. It also demonstrates how GPS has become the bedrock upon which Apple's sophisticated, interconnected ecosystem of spatial technology is built.

GEOINT Forming the Backbone of Enhanced Spatial Computing Context

Just as Apple has meticulously refined positioning for over a decade, it has made a parallel effort to build a unique mapping experience with increasingly rich geospatial intelligence. This initiative first gained momentum following the iPhone's debut in 2007, when Apple recognized the critical need for an integrated mapping solution. Initially, the company licensed Google's data but soon realized the necessity of building its own map system. This important pivot was driven by a strategy to fully control the user experience and to protect customer location data from external entities like Google.

In 2012, Apple took a bold step by replacing Google Maps with its own Apple Maps. This move was not just about launching an alternative mapping service; it was about reimagining what a map could be. Apple's ambition for Maps was to transform it into a gateway for enriched navigation and immersive spatial interactions, seamlessly merging spatial data with user-centric functionalities. This vision drew inspiration from the ethos of the Michelin guide, which was to stimulate travel and exploration. However, the transition to Apple Maps was met with significant challenges. The initial release was marred by technical issues, leading to widespread criticism and a public apology from CEO Tim Cook. Despite these early setbacks, Apple's dedication to a spatial future drove them to embark on a ten-year journey to hone and evolve Apple Maps. 

This endeavor saw them amalgamating expertise from leading mapping organizations like OpenStreetMap, establishing strategic partnerships, and carrying out various mapping efforts, including satellite imaging, LiDAR integration, and even deploying a fleet of sensor-equipped vehicles to gather precise data. The enhanced depth perception enables better object recognition, further improving the accuracy of spatial localization. Their commitment shone through in their attention to detail, such as frequently surveying subway stations and highways to provide accurate exit directions for users.

Figure 9. Apple Maps has evolved significantly with road and environmental scene details to offer more immersive navigation experiences

Since introducing indoor maps in 2017, helping users navigate complex airports and shopping centers, and completely redesigning Apple Maps in 2018 for more detailed coverage, Apple has made significant strides. The launch of Look Around in 2019 provided a Google Street View-like experience with higher imagery resolution and smoother transitions. The introduction of real-time route conditions, public transit schedules, cycling directions, and EV routing in 2020 reflected attention to diverse travel options.

Figure 10. Apple Maps has outperformed Google in providing precise navigation within many complex indoor environments with a commitment to high-fidelity 3D visuals

In 2021, Apple Maps released visually stunning 3D map updates with unprecedented quality, featuring elevation, road labels, custom-designed landmarks, and a beautiful nighttime mode. In 2022, step-by-step walking guidance in AR was implemented, allowing users to interact with virtual overlays in real environments and enjoy immersive navigation directions in select cities. In 2023, Apple Maps incorporated AI for personalized navigation: by analyzing users' travel habits, it provides personalized routes and adapts to individual needs. Such rich, detailed mapping is vital for Spatial Computing applications, which require a deeply nuanced understanding of the environment and spatial awareness to power curated AR experiences. Moreover, Apple's latest native app, Journal, proactively detects the locations linked to users' photos and prompts them to chronicle their journeys with Apple Maps embedded within. The app also makes intelligently curated, personalized suggestions to help users create a rich tapestry of memories filled with interesting location-based records, embodying yet another aspect of Spatial Computing built on GEOINT.

Figure 11. Apple Maps’ enhancements in city details and 3D experiences have extended beyond buildings and landmarks

These continuous enhancements laid a foundational framework for Apple's broader Spatial Computing ambitions. As Apple Maps continues to expand its detailed city experiences, the opportunities for more enriched and localized computing experiences are set to grow. Urban exploration apps could utilize this to craft immersive guides, spotlighting local landmarks, cultural spots, and hidden gems. Interactive AR overlays could bring historical landmarks to life, offering 3D bird's-eye views across different eras, alongside suggestions for activities and engaging information. Indoor navigation could transform airport experiences, guiding users straight to their gates, or navigating them through sprawling malls to specific stores. The Look Around feature stands to redefine neighborhood exploration, offering insights into nearby points of interest, demographic data, and safety information, thereby adding a layer of engagement to finding a new home or discovering tourist spots. For cyclists, a navigation system that superimposes directions and transit updates directly into the field of view, blending intuitive guidance with information Apple Maps already aggregates, such as bike lanes and EV charging station locations, could meaningfully improve commuting efficiency.

At the core of these transformative experiences is the integration of dynamic GEOINT data, a key component in aligning Apple's vision of an interconnected Spatial Computing environment with tailored user activities. The continual infusion of freshly updated spatial data into Apple Maps is essential for ensuring that devices are not only highly responsive but also exceedingly relevant as users navigate the physical world around them. Leveraging the power of advanced Earth observation satellites, Apple Maps is harnessing high-resolution imagery that makes the construction of more accurate and vivid 3D models possible. This is not just about visual fidelity; it's about ensuring that the digital representation of the world is as true-to-life as possible. Additionally, satellites' ability to constantly monitor and capture environmental changes is crucial to keeping Apple Maps up to date and finely tuned to the user's location and context. In urban areas where the landscape is ever-changing, reflecting new developments, road modifications, hazards, and other critical changes in a timely manner is the only way to ensure an adaptive and seamless user experience.

SatCom Bridging the Gap in Global Connectivity

Apple's 2022 announcement that it would integrate satellite communications into its devices brought satellite connectivity to a mainstream smartphone and represents a milestone in the advancement of modern communication technology. Partnering with satellite operator Globalstar, Apple introduced a groundbreaking feature known as Emergency SOS via satellite. This feature is designed for use in environments where traditional cellular and Wi-Fi networks are unavailable. It allows users to text emergency services, request roadside assistance, and share their location with friends and family when off the grid. This satellite connectivity is crucial for safety in emergency scenarios, particularly for travelers who often find themselves in remote areas with limited to non-existent cellular coverage. The iPhone guides users to point their device correctly to establish a satellite connection, and Apple developed a custom compression algorithm to make these emergency text messages smaller and faster to transmit, given the lower bandwidth of SatCom compared to cellular networks.

This initiative began with the iPhone 14 series and is planned to extend to the Apple Watch next. It will allow users to send emergency texts and SOS responses directly from their wrist, which is lifesaving in situations where accessing a phone is not feasible. In the realm of disaster management and response where every second counts, Apple's integration of SatCom into its devices is a game-changer without a doubt. Rescue teams, equipped with this reliable connectivity, can access real-time maps and critical data overlays. Spatial Computing could facilitate a shift in emergency operations and dramatically improve the effectiveness of coordinated real-time response, especially when both rescuers and those in need are linked via satellite.

Figure 12. Apple's satellite feature ensures users can connect with emergency services in all conditions

Yet this isn't going to remain a feature limited to emergencies. With the proliferation of SatCom, the scope for enhancing coverage even in areas with moderate cellular service is substantial. This expansion provides signal redundancy, ensuring consistent and uninterrupted data flow for Spatial Computing applications such as dynamic mapping, enhanced location-based services, and richer augmented reality experiences, all of which inherently demand robust data processing capacity wherever the user travels.

Spatial Computing Spanning Across the Entire Apple Ecosystem

Figure 13. Apple Vision Pro represents a hallmark moment in Spatial Computing history, empowering users and developers to envision new possibilities

Apple has always been at the forefront of integrating technology seamlessly into daily life and setting new standards for user experience, and its ventures into Spatial Computing are no exception. For years, the industry lacked a compelling vision. Many attempts at building basic Spatial Computing features for wide adoption have been marred by hardware limitations, privacy concerns, or simply an unclear purpose. With the Vision Pro, Apple seems to be filling this void. By equipping the headset with twelve cameras and six microphones, Apple is making a statement about quality and futuristic features. Furthermore, these sensors power novel eye-tracking and gesture-based controls that may very well push the boundaries of the Spatial Computing user experience.

This strategy bears resemblance to Apple's historic approach. The original iPhone was not the cheapest on the market, but it heralded a transformation in mobile technology. Similarly, the Vision Pro's pricing and features may very well be Apple's method of setting the standard in AR. By releasing the Vision Pro, Apple has laid the foundation, allowing users to start getting used to new ways of interacting and sharing experiences with people and machines in a digital world, reminiscent of how the iPhone revolutionized the mobile experience.

In wrapping up this exploration of Apple's progressive journey in building out its Spatial Computing tech stack, it's unmistakable that space technology has been, and will continue to be, at the core of where this future takes shape. Here, technology transforms from a mere tool into an almost sentient-like presence in our daily lives. As we look ahead, the possibilities are indeed boundless. Spatial Computing is set to revolutionize not just how we interact with technology, but also how we perceive and engage with the world around us.

This new era will challenge us to rethink the fundamentals of human-machine interaction. How will our social norms and daily routines evolve when our environments are enriched with layers of digital information? What new forms of creativity, collaboration, and exploration will emerge as the barriers between the physical and digital worlds dissolve? One thing is certain: the horizon of innovation will continue to expand, enhancing our understanding of technology and the world in ways that are currently beyond our imagination.
