



WORKING FROM ORBIT


VR PRODUCTIVITY IN (OR ABOVE) A WFA WORLD

Paul Tomlinson · Published in Immersed · 26 min read · Sep 27, 2021



View from my headset aboard Immersed’s “Orbitarium”


Listen to this article: “Working From Orbit” by Paul Tomlinson, via SoundCloud (soundcloud.com)


I float in space, surrounded on all sides by a grand view of the Milky Way
Galaxy. A movie-theater-sized screen hangs before me, gently curved, everything
at the perfect viewing distance. Eight different panes glitter with code, facets
of a technological jewel granting views into the brain of a system responsible
for moving tens of millions of dollars a day. A communications console canted
like a drafting table at my fingertips holds a workshop of quick-fire exchanges
with my colleagues, my meeting calendar, various API references, and camera
feeds of the “real” world. To my left, abutting the mammoth array of code, a
two-story tall portrait display shows the specifications for the task at hand
atop an ever-present Spotify playlist. I crank the tunes and get into my flow.

But this isn’t an excerpt from some Ernest Cline novel—this is my everyday
experience. This week, I’ll spend 40–50 hours in Virtual Reality, like I did
last week and every (work) week for the last 2½ years. It’s not just fun and
games — there are plenty of those, along with exercise, meditation, creativity,
socializing, etc. — but for this article, I’m only focusing on (and counting)
the work.

Yes, really: 8–10 hours a day strapped in. I’ve encountered a fair amount of
skepticism about both the technology and the general premise, along with plenty
of nit-picks about the software or how it fails to match some preconception of
how things “should” work. Reddit in particular is full of naysayers, to whom I
offer this rebuttal:

> People saying: “It can’t be done,” are always being interrupted by somebody
> doing it. — Puck

The technological timeline is replete with supposedly doomed-to-fail notions and
novelties that went on to wild success. Most were not born “fully formed”, and
required several generations to grow and adapt as we grew and adapted to them.
And no, this isn’t for everybody, not yet — but not only is it possible now,
it’s the only way I use my work computer anymore.

My work is not VR related, either; it’s regular old programming, information
systems development, and office stuff I just happen to do in VR. The strategy
and tools apply to any computer and communications work right now and will apply
to nearly everything in the future. “Working remotely” takes on entirely new
dimensions when the distance from the beach to low-Earth-orbit is a single
click.

Why am I telling you this, and why should you care?

To answer the second question, look at the Personal Computer boom of the 1980s:
diehard computer geeks of the day saw incredible potential in the tools, and
while they (we) spent a lot of time goofing off with the technology itself, they
still helped shape that state of the art into something useful to the general
population. Not everyone was on board with the clunky beige boxes and the
soothing squeal of a modem handshake at the time, yet in 2021 fully half the
world carries a sleek pocket computer connected to the Information Superhighway.

Right now, Virtual Reality is at its “1980s beige noise machine” stage: a geek
cynosure and a consumer novelty. What’s coming will look very different from
what’s here; nevertheless, the DNA is already taking shape and it’s not going to
take another 40 years to change the world.

On that first question: I’ve been a full-time VR worker since April 2019,
spending in the neighborhood of 4,500 hours banging away at real work on virtual
screens. It’s not a stretch to say I’m in the top few percent of VR users on the
planet; I’ve spent much time watching developments in the field and
extrapolating future possibilities. I don’t insist on my version of the future,
but I hope what I’ve seen is worth sharing.

I’m not going to cover everything — simply give a taste of how I make it work
for me, and get people thinking, talking about, and pursuing the possibilities.




DEFINE “VIRTUAL”… 🤔

In common use “Virtual Reality” refers to a headset worn by the user, presenting
them with a three-dimensional computer-generated environment. It’s the same kind
of interactivity and visual feedback available in modern video games or
simulators, only more engaging due to the sense of scale, presence, and
all-encompassing field of view.

The cool kids have added several new companion phrases for how simulated content
might blend with the physical environment, such as “Mixed Reality” (MR) or
“Augmented Reality” (AR), with the umbrella term “Extended Reality” (XR)
covering everything. For the sake of this article, we’ll stick with “VR”.

This is only the latest iteration in a long line of “virtual” environs and
abstractions, a logical evolutionary step. Books are fantastic examples of
conceptual environments: models of real or fictional actors, events, settings,
and ideas—but static, frozen in time. Theater is a better illustration, a
manufactured (virtual) scenario presented to an audience as a dynamic shared
experience.

Technology has refined and distributed these shared experiences throughout
history. Live theater evolved to movies, then television, then
dynamic/responsive video (i.e., games): from stage to silver screen to LCD. It’s
the same with music or audio performances: live presentation gave rise to the
phonograph and its successors — home stereos, boom boxes, and the Walkman moving
it ever closer to the “user”.

We’ve had virtual audio strapped to our heads for over 100 years, and have
finally reached a point where consumer-grade virtual visual can join it. Those
visuals bring with them the illusion of space and scale and make it easy to
translate physical actions into contextually rich commands. The dynamic,
responsive, and immersive experience coopts our senses into perceiving the
simulated environment as our inhabited space: our Virtual Reality.

Interacting with computer-generated worlds can be an engaging, magical
experience — but is by definition immaterial unless there’s something to connect
those actions to non-virtual outcomes. Sometimes that’s advantageous, like heavy
equipment training, or rehearsing medical procedures. Inherently immaterial
activities, those whose only product is communication, or a digital artifact,
make no such trade-off — which brings us to computer work in an information
economy, AKA “my office.”




THE SETUP

It’s using a computer, but… in VR. There are several reasons for doing so, which
I’ll cover later; first, however, a disclaimer to set expectations: in the
current “beige-box” stage, work is still work, meetings are still meetings, and
what makes this valuable has everything to do with comfort, productivity, and
cost (plus my utterly nerdy fascination). Most everything takes place via
traditional software common to any office environment: Google products, Zoom,
Slack, spreadsheets, word processors, IDEs, etc.


Laptop → Immersed → Router → WiFi → Headset → Magic

I use a regular laptop, reasonably powerful but not a gaming PC, and well below
“VR Ready” specs. The computer doesn’t generate the virtual environment; it only
runs the regular office software with one important addition: a small agent
program, Immersed, which encodes screen contents as video streams and beams them
to the headset via WiFi.

My current headset is an Oculus Quest version 2 (starting at $300 at the time of
writing), which receives those video streams and renders them as displays inside
an environment of its own making. Wireless and standalone, it runs its software
without the aid of a PC (most of the time), trading higher-end performance and
graphics for freedom of movement and the convenience of setting up anywhere. The
trip from video encoding on the PC, over the WiFi, to display in the headset
averages about 3 ms for me, using a 5 GHz access point that’s several years old.
0.003 seconds is well below human perception, making it effectively
instantaneous.
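For a sense of how little that trip matters at headset refresh rates, here is a
back-of-the-envelope sketch. The ~3 ms figure is the one quoted above; the 90 Hz
refresh rate is simply a typical Quest 2 mode, used here for illustration.

# Back-of-the-envelope check on the streaming latency described above.
REFRESH_HZ = 90
frame_budget_ms = 1000 / REFRESH_HZ      # ~11.1 ms between displayed frames

streaming_trip_ms = 3.0                  # encode -> WiFi -> display, as quoted above
fraction_of_frame = streaming_trip_ms / frame_budget_ms

print(f"frame budget at {REFRESH_HZ} Hz: {frame_budget_ms:.1f} ms")
print(f"screen streaming uses about {fraction_of_frame:.0%} of one frame interval")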


LAYOUT

These days I use a three-screen setup (I have other saved layouts on standby for
different kinds of work/focus): my Main work display, a Reference screen in a
portrait orientation where I keep details and materials on the active task (and
a Spotify window for cranking tunes), and a smaller landscape display with
Communications, calendar, code documentation, and so forth.


It’s all about scale

One of those displays corresponds to the laptop’s physical screen, and the other
two are purely virtual — 75% of my pixels are conjured into being via
behind-the-scenes wizardry. The laptop thinks they’re real, applications work
fine, but the only way to see them is in the headset where they’re brought to
life. I could add more — Immersed supports up to five such virtual displays —
but I use a screen-content manager as well (also confusingly called “virtual
desktops”) which allows these three displays to act like fifteen. My visual
field and attention are maxed out as it is.


It’s not all the eye can see; but it’s more than all the eye can use.

The resolution of these very large displays is surprisingly average—1080p
(Reference, Communication) and 4K (Main). This makes the dot pitch unimpressive
by the numbers, though still more than twenty-five times that of a roadside
billboard display. Higher resolutions are available, but this is my calculated
trade-off between pixel parity (more on that below), computer performance, and
latency. Applications are tuned for readability and crispness, emphasizing
information density over anti-aliasing or smoothness. The result is a character
count of 511x129 on my Main screen when writing code: 66,000 characters is a LOT
of code, easily the equivalent of four to six typical IDE windows in width (and
enough to show the entire text of this article 1½ times, all at once).
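To make the arithmetic behind those numbers explicit, here is a small sketch:
the 511x129 grid is the count reported above, while the per-character cell size
and the 100-column “typical IDE pane” are assumptions used only for comparison.

# Sanity check on the character counts above.
MAIN_W, MAIN_H = 3840, 2160      # 4K "Main" display
COLS, ROWS = 511, 129            # character grid reported above

chars_on_screen = COLS * ROWS    # 65,919 -- the "66,000 characters" figure
cell_w = MAIN_W / COLS           # ~7.5 px per character column (assumed monospace)
cell_h = MAIN_H / ROWS           # ~16.7 px per line

typical_ide_cols = 100           # assumed width of a single conventional editor pane
ide_windows_wide = COLS / typical_ide_cols

print(f"{chars_on_screen:,} characters visible")
print(f"~{cell_w:.1f}x{cell_h:.1f} px per character cell")
print(f"~{ide_windows_wide:.1f} hundred-column editor panes side by side")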

Input is via standard keyboard and mouse (or trackball, in my case)—floating
per-application interfaces, or waving my hands around a la Minority Report,
aren’t a thing yet (for which I’m frankly grateful, doing that all day would be
exhausting). Other tools work well, such as voice recognition or graphics
tablets, but aren’t suited to programming. I touch type so it doesn’t matter
that I can’t see my keyboard, but there are options for bringing it into the
environment by using a virtual overlay, or (soon) opening a “portal” by way of
the headset’s tracking cameras to see a video of your surroundings (Facebook’s
own Horizon Workrooms uses this feature right now).

Since all I need is a keyboard, mouse, and a place to park myself, I’ve
completely ditched the traditional desk. I can use a floor setup for part of the
day and mix it up with a standing arrangement for the rest. I adjust as needed
for comfort, efficiency for the current task (reading email or research vs
writing code, for example), or can reclaim the entire space if I need an
exercise or meditation break, want to visualize some problem solving, or a round
of golf breaks out.


Floor chair, keyboard tray, and giant mug

Standing configuration

Not a single visible display in the entire office


TIPS

A few quick pointers before I talk about the experience, roughly in order of
importance/utility:

 1.  Find the sweet spot for your eyes. Adjust the IPD and lens alignment: left,
     right, up, down, distance, and tilt. Work at it one eye at a time, and walk
     it in. Pixels are not evenly distributed — the highest density is right in
     the middle of the sweet spot, mimicking the acuity of the fovea.
 2.  If you need corrective lenses, get lens inserts: it’s superior to wearing
     glasses, and I find it better than wearing contacts. For horrible
     astigmatic myopia like mine (-7.5), it was cheaper than most pairs of
     glasses I’ve had, and a totally reasonable expense since I use them all day
     for work.
 3.  CLEAN your lenses. No matter what you’re looking through, make sure you can
     SEE through it! This applies to the lenses both in and out of the headset —
     the tracking cameras need a consistent view of the world, and that means no
     smears or smudges.
 4.  While you’re at it, clean the headset too: a regular antimicrobial
     wipe-down will keep things healthy and pleasant. An impermeable contact
     surface also helps (polyurethane or silicone interfaces, for example).
 5.  Aim for pixel parity: a virtual screen is a picture of a picture, meaning
     the headset’s resolution and pixel density are the limiting factors. The
     closer a virtually rendered screen’s pixel density is to the headset’s, the
     sharper the picture will be. The Quest 2 is 1832x1920/eye (Vive Focus 3 is
     2448²/eye) — displays can be larger than that, but will look best if the
     maximum visible portion is closest to 1:1 (a rough sketch of this follows
     the list).
 6.  Don’t bother emulating physical screen configurations! Virtual displays can
     be any size and any place, not just imitations of their physical
     counterparts. Walls, theaters, backdrops, tabletops, wrap-arounds—all kinds
     of possibilities.
 7.  “Large & far” is more comfortable than “small & close” for an equivalent
     field of view — it takes the same amount of visual space, but is far more
     relaxed for the eyes owing to the difference in convergence point.
 8.  Ergonomics still apply: avoid prolonged neck turning, head tilting, or
     other static positions. Place things to encourage occasional movement
     without holding any one posture overlong.
 9.  Good WiFi matters: maximize bandwidth and minimize hops. A clear 5 GHz
     channel will give you 866 Mbps, and if possible the link should run directly
     between the headset and the computer—or at a minimum have the computer
     hard-wired to the router.
 10. Read (or watch) the manual. Know how everything works, and work with it —
     the patience to get your environment perfectly tuned pays exceptional
     dividends.
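On tip 5, a rough way to reason about pixel parity is to estimate how many
headset pixels actually span a virtual display and compare that to the display’s
own resolution. The per-eye resolution below is the real one mentioned in the
list; the field-of-view figures and the linear approximation are simplifying
assumptions (real lenses concentrate pixels near the center of the sweet spot).

# Rough pixel-parity helper for tip 5 (illustrative, not a calibration tool).
HEADSET_EYE_W = 1832          # Quest 2 horizontal pixels per eye (Vive Focus 3: 2448)
HEADSET_HFOV_DEG = 96         # assumed usable horizontal field of view

def headset_pixels_across(screen_hfov_deg: float) -> float:
    """Headset pixels spanning a virtual screen that covers screen_hfov_deg
    horizontally (crude linear approximation)."""
    return HEADSET_EYE_W * (screen_hfov_deg / HEADSET_HFOV_DEG)

screen_px = 1920                           # a 1080p virtual display
available_px = headset_pixels_across(70)   # screen filling ~70 degrees of view
parity = available_px / screen_px          # below 1.0: the headset is the bottleneck

print(f"{available_px:.0f} headset pixels span the {screen_px}-px-wide screen "
      f"(parity {parity:.2f}; closer to 1.0 looks sharper)")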




THE EXPERIENCE

Screenshots don’t do it justice. Videos fail to capture the scale and grandeur
of the experience. These pictures are a poor illustration of what I see in the
headset—low resolution, compressed field-of-view, lacking depth and scale. The
low display down front? That’s the size of an executive desk. The code is like
an IMAX® theater—I can’t even see all of it at once.


Entire libraries at a glance

What’s it like to actually use? In a word: comfortable. Given a few more words,
I’d choose productive and effective. I can resize, reposition, add, or remove as
much screen space as I need. I never have to squint or lean forward, crane my
neck, hunt for an application window I just had open, or struggle to find a
place for something. Many trade-offs and compromises from the past no longer
apply — I put my apps in convenient locations I can see at a glance, and without
getting in my way. I move myself and my gaze enough throughout the day that I’m
not stiff at the end of it and experience less eye strain than I ever did with a
bunch of desk-bound LCDs.

Complete control over my visual environment is like using noise-canceling
headphones for my eyes. I can choose the levels of color energy and busyness
best suited to whatever I’m focusing on, usually minimizing contrast the same
way you would with a bias light for TVs. I’ll float in a nebula while looking at
code in a dark IDE, chill out on Olympus in a sunset cloudscape while composing
slides for a presentation, or overlook a tropical lagoon while grinding through
email. Anything I’m doing, I’m doing with complete focus in a perfectly matched
atmosphere.


You truly belong here with us among the clouds.

I tend to prefer simple 360-degree photos over more elaborate environments out
of practical consideration: my displays are larger than will fit in any
reasonable facsimile of an indoor setting — it doesn’t really work to cram a
movie theater into a café. There are plenty of those environments from which to
choose, and I expect the collection will keep growing as the system continues to
mature (eventually incorporating real-world Augmented Reality elements). An
alpine chalet with a crackling fireplace as snow gently falls outside? Check. A
hipster coffee salon? Covered — a popular place for hanging out with other
users. Or an open-walled wooden lodge with a variety of times and weather
available, a grand corporate plaza with a waterfall, etc., etc. Outside of
managing performance, there are no limits on what kind of environments are
possible.

I’ve found it useful to play around with those environments, too: in the coffee
shop setting I overlaid a piece of artwork on the wall with one of my screens,
turning it into a TV for a more realistic ambience. In the plaza, I re-created a
lecture hall by using one display as a stand-in for the projector showing my
slides, another as my lectern-mounted laptop with speaker notes, and a “talk
timer” display in the front row. By practicing my presentation for a semi-annual
company retreat (pre-COVID) and ironing out the flow I earned one of the highest
speaker review scores of the entire week—it went exactly as rehearsed in a
near-identical real-world setup.

This kind of comfort and productivity isn’t automatic or accidental, it’s
something I consciously refine to maximize the human side of my work equation.
It’s also not limitless; I’ve traded one set of compromises for another,
optimizing for a different set of concerns more in line with my needs. I worked
up to my full-time schedule, became accustomed to the weight and fit of the
headset, manage the temperature and airflow of my office as needed, and
regularly clean the contact surfaces. I take periodic breaks to hydrate, stretch
my legs, and recharge myself.

I’ve lightly accessorized the headset too: replaced the stock facial foam with
more thickly padded polyurethane-leather ($30), changed the head strap for a
halo mount ($50), and added prescription lenses to eliminate the need for
glasses or contact lenses ($70), altogether increasing the cost 50% over the
base price of the Quest 2 (not that I paid the base price — I ordered a model
with higher storage capacity, and this is only my latest headset). I treat the
entire rig as a business expense/investment and am not disappointed.

There have been some delightful surprises on the application side, too. Using
regular software in that virtual environment is still like using that regular
software, albeit with a lot more elbow room. There are a few notable exceptions
though — the ones that really benefit from the increased real estate are those
involving visual communications: when “seeing the big picture” is no longer
metaphorical, and you can make out the fine details at the same time (forest +
trees). A stand-out example is the very aptly named MURAL, which when given an
entire wall (or movie screen) can be a sublime experience in creativity and
shared comprehension.

Some old-school tools are similarly supercharged: an eight-way (or more)
tmux/vim split with multiple files, terminals, unit tests, and logs, creates a
massive kind of working memory — maintaining end-to-end context in otherwise
sprawling complexity.
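As a taste of what that looks like in practice, here is a small sketch that
scripts such a layout with the standard tmux CLI, assuming tmux is installed.
The session name and the per-pane commands are placeholders, not the layout I
actually run day to day.

# A sketch of scripting an eight-pane tmux workspace like the one described above.
# It shells out to standard tmux commands (new-session, split-window, select-layout,
# send-keys); pane addressing assumes tmux's default base-index/pane-base-index of 0.

import subprocess

SESSION = "bigboard"                      # hypothetical session name

PANE_COMMANDS = [                         # one command per pane, purely illustrative
    "vim .", "vim tests/", "htop", "tail -f app.log",
    "git status", "python -m pytest", "psql mydb", "watch date",
]

def tmux(*args: str) -> None:
    subprocess.run(["tmux", *args], check=True)

# Create a detached session large enough to tile comfortably, then keep splitting
# and re-tiling until there is one pane per command.
tmux("new-session", "-d", "-s", SESSION, "-x", "220", "-y", "60")
for _ in PANE_COMMANDS[1:]:
    tmux("split-window", "-t", SESSION)
    tmux("select-layout", "-t", SESSION, "tiled")

# Start something in each pane.
for index, command in enumerate(PANE_COMMANDS):
    tmux("send-keys", "-t", f"{SESSION}:0.{index}", command, "C-m")

print(f"attach with: tmux attach -t {SESSION}")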


APART TOGETHER: THE SHARED EXPERIENCE


Bigscreen VR Selfie

Meetings are best in person, in VR, in MURAL, and in Zoom — in that order. As a
remote worker of several years, “in person” is a rarity for me — so I use VR to
preserve the feeling of shared presence, of inhabiting a place with other
people, especially when good spatial audio is used. Hand tracking enables
meaningful gestures and animated expression, despite the avatars’ cartoonish
appearance — somehow it all “just works”: your brain accepts that these people
you know are embodied through these virtual puppets, and you get on with
communicating instead of quibbling about missing realism (which will be a
welcome improvement as it becomes available, but doesn’t stop this from working
right now).

I’m calling out MURAL again because of an important finding from VR
collaboration: sometimes being in the same place and looking at the same stuff
is a good way to meet, and remains the standard in conference rooms across the
globe — but the term “Death by PowerPoint” was not coined ex nihilo. Countering
PowerPoint prison by giving each person their own view of shared content, under
their own control, is a big step depending on the content. If it’s still slides,
all you’ve done is invent “meeting TiVo” — Google Docs is an improvement, though
still confined by its linearity. MURAL’s working canvas, on the other hand,
invites people to browse ideas and their connections seemingly effortlessly,
jumping forward, back, and around as needed: context turns seeing into
understanding. The best measures of this are the kinds of questions asked, the
conversations sparked, and the palpable engagement “in the room”.

The important finding is how that works: people in the Immersed software can
join the same space and choose whether to share their screens with each other. I
can sit right next to someone and we each have our own giant screens completely
invisible to the other — no obstructions to the conversation. Shared space for
communication, local independent space for productivity. It’s kind of like
hanging out with everyone and their imaginary friends (screens) and removes any
distraction from the collaboration process. It’s a real trip to experience, and
incredibly freeing.


Lip-synced, hand-tracked, webcam doppelganger

For meeting with those not in VR, or if I have a video call that needs input
rather than passive attendance, I’ll frequently use a virtual webcam to attend
by avatar. It’s sufficiently demonstrative for most team meetings, and the crew
has gotten used to me showing up as a digital facsimile. I’ll surface from VR
and use a physical webcam for anything sensitive or personal, however.

A special note about Horizon Workrooms, since that’s new and big on the scene:
they’ve done an excellent job of creating a high-end virtual meeting room that
feels and acts “like the real thing.”


Horizon workroom with a pancake-screen participant

It has a ton of promise, but… I don’t really care for the promise it’s making.
100% of what you can do in Workrooms is feasible in a physical setting, although
it would be really expensive (lots of smart hardware all over the place). But
that’s the thing: it’s imitating life within a tool that doesn’t share the same
limitations, so as a VR veteran I find it bland and claustrophobic. That’s going
to be really good for newcomers or casual users because the skeuomorphism is
familiar, making it easy to immediately orient oneself and begin working
together — and that illustrates a challenge in design vocabulary. While the
familiar can provide a safe and comfortable starting point, the real power of VR
requires training users for potentially unfamiliar use cases. Also, if you can
be anywhere, why would you want to be in a meeting room, virtual-Lake Tahoe
notwithstanding?

Exploring VR collaboration in any depth will require a separate article
(probably a series of them). The field is shifting rapidly, and there are many
options and considerations — check out the “See Also” section in the Appendix
for good references to people currently exploring this. What I would say, in
addition to the above, is that while VR isn’t perfect, combined with the
accessibility and quality-of-life benefits of Working From Anywhere it remains
my number-one choice (and if I were in-the-office or otherwise colocated? I
would still use the headset at my desk to turn my cubicle into a cavernous
expanse).




OK, BUT, LIKE… WHY?

Why go through all that effort? Other than the nerdy novelty and megafan
hyperbole, what’s the point?

Technology is all about force multiplication: do more with the same or less
effort (or cost), or do new activities or forms of work not otherwise possible.
For new strategies to be useful and adopted they need to tip the balance of some
scale. For some users, megafan street cred is enough — but there are several
other measurable advantages (these are a few things I’ve identified; your
mileage may vary).

Cost: even with my higher-capacity headset and accessories, my entire outlay is
easily competitive with a conventional multi-monitor configuration.
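A ballpark comparison: the accessory prices are the ones given elsewhere in this
article, while the higher-storage headset price and the multi-monitor prices are
assumptions for illustration only.

# Ballpark figures behind the cost claim (illustrative assumptions, not quotes).
vr_rig = {
    "Quest 2, higher-storage model (assumed)": 399,
    "Padded facial interface": 30,
    "Halo-style head strap": 50,
    "Prescription lens inserts": 70,
}

multi_monitor = {                              # hypothetical three-screen desk
    "27-inch 1440p monitors (x3)": 3 * 250,
    "Monitor arms": 150,
}

print(f"VR rig total:             ${sum(vr_rig.values())}")        # $549
print(f"Multi-monitor desk total: ${sum(multi_monitor.values())}")  # $900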

Focus: my increase in productivity in a distraction-free environment, measured
as total output, work quality, or duration of time on task, ranges from 20%-100%
above baseline. Compartmentalization is another major part of that focus —
remote workers know it’s important to have a physically separate space in which
to work, a boundary to prevent professional obligations from bleeding too much
into personal life. Confining the space to an otherwise invisible slice of the
metaverse keeps it well partitioned — removing the headset or quitting the app
is all it takes to fully disengage.

Flexibility: the range of adaptability provides options not otherwise practical
or possible. Screens can be ergonomically (re)positioned for a variety of
postures and spaces, making a cramped dorm room as comfortable as a corner
office. Access to my entire workplace from a hotel means I don’t sacrifice
performance when on the road (less applicable these days, but sure to show up
again in the future).

Accessibility: the visual nature of this configuration favors sighted persons,
but despite that exclusivity, it provides computer usability improvements for
those with visual deficits by making things as large as one needs, or managing
contrast and lighting. Non-traditional seating or desk (or bed) configurations
are welcome, accommodating people with a variety of mobility or physical
challenges. In my personal experience, a few months after going full-time-VR I
broke multiple bones in my foot, with no weight-bearing for four months and
major elevation requirements for the first two. I didn’t fit at my desk, and for
weeks I had to elevate my foot high enough that the recliner in the living room
was my only option. Out I went with the laptop, headset, and full command of my
multi-display setup in all its versatile glory (incidentally, for anyone with
weight-bearing restrictions from lower leg injuries, I highly recommend a
hands-free crutch — it was an enormous help during my convalescence).


Left foot, first metatarsal

Accessibility in VR is still developing, and will eventually come to include a
much wider set of features: color and other vision adaptations, motion
attenuation or magnification for motor and neuromuscular challenges, navigation
and memory aids (for both virtual and physical environments, by way of AR),
neurodiverse management tools, etc. The opportunity to so completely mediate
sensory experiences and modulate physical controls will open a range of
inclusivity surpassing the most advanced technologies currently available. This
inclusivity is about more than biological and neurological adaptation, too: many
socio-economic barriers lessen or disappear once in VR (though electricity,
bandwidth, and headset are major investments in many or most parts of the world;
there’s more work to do on that front).


WHY NOT

There are some caveats to all of this, and that may make it less suitable for
some people or circumstances.

You lose situational awareness with your visual field occluded (obviously), and
if you’re using headphones at the same time that cognitive cocoon of isolation
could be problematic or even risky — it only makes sense in an otherwise safe
and secure setting. Then there are physical demands of wearing the hardware for
long periods: getting into the optical “sweet spot”, establishing proper balance
and weight distribution (and building musculature), maintaining temperature,
cleaning and caring for equipment, and so on.

To call the current breed of productivity tools quirky or idiosyncratic would
understate how clunky and fiddly they can be. I’m willing to put up with a lot
to make this work: tolerating instability, jumping through frustrating hoops to
get everything configured just so, scouring for obscure tools or adjustments,
and pampering my carefully assembled house of cards so nothing can knock it
over. This field is rapidly changing, full of alpha and beta products, each
requiring frequent adaptation as they introduce new conventions — it’s not
enough to be familiar with VR; each product in the chain requires learning how
it approaches the concept.

Even as standard conventions and stable, best-in-class products emerge, not
everyone needs this much screen space, has appropriate material to make good use
of it, or can afford to be oblivious to the outside world. Conversely, for some
it’s not good enough — lacking in resolution, bandwidth, features, hardware
quality, physical form factor, etc. Or there may be objections on principle or
political grounds, such as Facebook’s ties to Oculus hardware (FB offers a
business-class version of the headset without their account requirements, but at
a much higher price for the same hardware — though still not so expensive as the
Vive Focus 3, which also runs this software).

I see enough promise in the current technology to make it worthwhile. The patience to
wring every ounce of capability from my tools bears fruit in my daily work, and
I’m happy to keep using everything I have right now. But this is all about what
is — the future, what may be, is far more exciting. Most of the above, both good
and bad, won’t matter as future developments will be solving different problems
in far different ways.




THE FUTURE

Successful consumer technologies generally tend toward smaller, lighter, faster,
and more ubiquitous — VR has been no exception, and there’s every reason to
believe the trend will continue. New breakthroughs, features, and form factors
will emerge and integrate as seamlessly with daily life as other technologies on
which we’ve come to rely. We will inhabit a world of commingled illusion, the
simulated and substantial blending together by varying degrees.

Realism will increase (perhaps to hyperrealism) until simulated objects and
settings are indistinguishable to our senses, and our interactions with them
will feel just as natural. Acting in simulated contexts will have physical
consequences as systems
interpret and project actions into the world — telepresence will take a quantum
leap, removing limitations of time and distance. Transcending today’s drone
piloting, remote surgery, etc., we will see through remote eyes and work through
remote hands anywhere.

That’s obviously some distance into the future, and that “successful”
prerequisite is doing a lot of heavy lifting in this presumption. I’ve talked a
lot about the specifics of my current setup and experience, but looking that far
out will require shifting to the abstract. The tools will be shaped by their use
along the way — what are we going to do with them? A good way to answer this is
to look at what we have been doing with them, and see if we can figure out why;
if we understand the intent we might be able to project the destination.

Going back to the definition of “virtual”, the driving force in its evolution
has been a kind of communal neuroprosthesis — which is just a fancy made-up way
of saying “a shared mind.” Humans have been augmenting memory by distributing it
to one another, combining ideas (processing power) for problem-solving, and in
essence crowdsourcing civilization itself. Along the way, we added entertainment
value through those same channels, but the core tenet remains: externalize
aspects of cognition and exchange them with our neighbors. I’m not going to try
to explore the value, politics, and economics of what we share, but the sharing
itself is key.

Inventing better ways of getting working knowledge from one person or group to
another can increase the resilience and capacity of those groups — likewise for
motivation, directed attention, etc. Enriching that transmission with
better-suited techniques and technologies is naturally rewarded with faster and
more potent exchange: increased fidelity and reduced barriers to entry make it
easier to get things in/out of our collective heads.

“Increased fidelity” doesn’t just mean verisimilitude: making things seem more
“real” has its place, but isn’t everything when it comes to literally sharing
ideas. For that, we need alignment with natural thought processes: communicating
the same way we think, losing less in translation. And it turns out, one of the
best ways humans think is with space.

The memory palace, or method of loci, is perhaps the most effective mnemonic
device ever created. Dr. Lynne Kelly has proposed that pre-literate cultures
maintained their encyclopedic knowledge by combining a similar approach with
physical, ceremonial settings: using and shaping their surroundings to aid the
memorization process. But this extends well beyond strong memories and tribal
lore — we readily do versions of this today in our personal spaces, often as a
means of improving or delegating executive functions: we put things in places as a
prompt to take action, we accumulate related tasks and materials together to
prepare over time, plan future activities, and so on. We use the very
environment around us to think:

> Reliance on and off-loading of mental storage and work to such external
> devices massively boosts the storage capacity and complexity of information
> while effectively guiding individuals’ behavior toward their goals. Those
> goals can exist at much further distances across space and time than was the
> case using just the internal mental means of representing information.
> 
> Barkley, Russell A. Executive Functions: What They Are, How They Work, and Why
> They Evolved. Guilford, 2020, p. 112

VR and its eventual successors give us access to vastly more complex means of
“off-loading mental storage and work” by providing infinite malleable space, and
assigning more meaning and capability to the actions we take within it. That
meaning will be rich and potent: practitioners of sign languages, such as ASL,
already imbue their personal space with all kinds of nuance, demonstrating
relationships, temporality, distance, magnitude, and other aspects through the
physical placement and expression of word forms incidental to the actual words.
Space itself takes on meaning through indexing, a kind of spatial pronoun.

Our very psyches will become entwined with the worlds we shape, using lessons
from psychodrama et al to bind subconscious meaning to externalized objects and
elements. We will walk through our own minds and consciously sculpt ourselves.

Interleaving all of this with our physical environments and enabling truly
shared experiences will lead to a kind of collaborative cognition: thinking
together.

If that’s where we’re headed, then the green field, blue ocean opportunity right
now is human-centric information theory development — using neuroscience to
combine spatial navigation and cognition as a starting point, and building from
there to shape the terrain and guide the creation of language, tools, and
systems to bring that vision to life.

As I said before, I don’t insist on my version of the future — but I do love the
idea of using technology to cultivate mutual understanding, unify people, and
propel their creativity.




APPENDIX


DISCLOSURE

I receive no compensation or sponsorship related to the products, services, or
entities mentioned herein, nor use affiliate links.


FREQUENT QUESTIONS AND SKEPTICISMS

Are you an Immersed employee (or a shill)? No. I’m just a geek who uses their
stuff to do my stuff — they’re nice folks though, and have built a decent
community. I recommend stopping by their Discord sometime. I wrote this article
independently, without compensation, and agreed to let Immersed host it on their
blog.

This will never be mainstream. Is that a question? That doesn’t seem like a
question. It’s perhaps somewhat short-sighted and overlooks the growing number
of contributors to the field as well as Immersed’s own success in their most
recent fundraising. I do agree that “this” will probably not be mainstream in
its current incarnation, but it’s the foundation of what it will become, and
that will be mainstream.

Nobody needs that much space. Yeah, and 640K ought to be enough for anyone. Me?
I do highly contextual work, with multiple work orders and their histories open,
supporting reference documentation, API specifications, several areas of code
(and calls in the stack), tests, logs, databases, and GUIs — plus Slack,
Spotify, clock, calendar, and camera feeds. I tend to only look at 25% of that
at once, but everything is within a comfortable glance without tabbing between
windows. Protecting that context and augmenting my working memory maintains my
flow.

Motion sickness: Yes, VR-induced nausea is a real thing. And no, of the many
people who have demoed my rig none have become nauseated. Well-tracked VR in a
stationary setting (or at least one where visual and vestibular systems remain
in agreement) eliminates most potential for simulator sickness. High frame rates
and low lag take care of most everything else.

Is it disorienting? In the context of changing environments, or taking the
headset off after long work sessions, no — there’s no disorientation. Adapting
to the new environment is immediate, like moving between rooms, and since the
focal length in the headset matches regular human vision there’s no acclimating
or adjustment. Though I will say, taking it off to find that night has fallen
can be a touch startling, but hyperfocal nerds at their computers have always
had to deal with that.

Why use Oculus hardware vs others? Specifically, with more capable headsets out
there, why settle for what’s largely considered an expensive toy?

 * Foremost, capability — it runs the productivity software I like, and there
   are very few that do that.
 * Convenience — it’s readily available, easy to work with, and versatile
   (portability, configuration, interoperability, etc.).
 * Cost — it’s by far the most affordable headset in its class, and while I have
   a tendency to be lavish with my gadgetry I’m still a cheapskate: I love a
   good deal and a favorably skewed cost/benefit ratio even more.
 * Critical mass — Oculus’s large market share means ease of replacement,
   available accessories, software library, and longevity for the product’s
   service life. They’re going to hang around to make more, and I won’t be stuck
   with something useless.


OTHER PROGRAMS

A short list of other VR programs I use by category.

Productivity: I’ve looked at almost everything out there, and always come back
to Immersed as the most capable for my needs. Not that there isn’t value in the
competition, and it’s encouraging that so many people are working in this space,
but so far it’s been the best for me. The fine folks over at Five by Five did an
amazing write-up on their experience in product selection that mirrors many of
my own findings.

Collaboration

 * BigScreen
 * Walkabout Mini Golf (or TopGolf)
 * MultiBrush (OSS successor to TiltBrush)

Meditation

 * Calm Place
 * Supernatural
 * MultiBrush

Exercise

 * Supernatural
 * Beatsaber
 * Pistol Whip
 * Synth Riders
 * The Thrill of the Fight

Creativity

 * Gravity Sketch
 * MultiBrush

Entertainment

 * Too many to list, but: Onward is my number-one jam, followed by BigScreen


SEE ALSO

 * XR for work
   - https://www.youtube.com/c/XR4WORK
   - https://www.facebook.com/groups/xr4work
 * Five by Five:
   - https://www.vrfor.work
   - https://fivebyfiveglobal.com



