SCOBLEIZER

Spatial Computing strategies.


ANNOUNCEMENT AND WHY THE TESLA HUMANOID ROBOT MATTERS

Irena Cronin and I are seeing huge shifts coming as autonomous vehicles get to
the point where they are driving around San Francisco without humans. We
recently started comparing notes and found we are spotting the same trends.

So, today we are announcing that I am rejoining Infinite Retina as Chief
Strategy Officer. We are bringing new understanding to entrepreneurs and product
strategists on Augmented Reality and everything related to it, including AI and
Computer Vision. Here is our first analysis (located at: [url]) on why we should
all be paying attention to what is happening in humanoid robots and consumer
electronics, which include autonomous vehicles that are now arriving in people’s
garages and, soon, Augmented Reality devices from Apple and others.

Tomorrow Elon Musk will step on stage and show us the latest AI and robotics. We
think this is a much more important announcement than most people are expecting,
and here’s an analysis of just how deeply Optimus (Tesla’s humanoid robot), and
other humanoid robots, will change all of our homes.

The last time Irena and I collaborated, we wrote a book, The Infinite Retina,
that Qualcomm’s head of Augmented and Virtual Reality, Hugo Swart, reviewed as a
“must read.” This time, in addition to consulting, Irena and I are doing new
analyses in the form of a paid product on Augmented Reality topics that we will
offer on a monthly basis: one for anyone who loves Augmented Reality devices,
automated electric cars, and other products that make life more fun and better.

Tesla Robot: Consumer Strategy 2028

“Knock knock.”

“Who is there?”

“Your pizza delivery robot. If you invite me in, I can set up your table and do
other tasks.”

It will be the first time a product introduces itself to consumers at their
front doors, and once inside it will bring a wholesale change to all the brands
in the home. Most of them will go away. The robot will – over the years – replace
“old brands” with “new brands” that do the same thing, but better. It’ll even
change the showerheads to new models to save energy and water.

Skeptics are right to point out this won’t happen soon. But by 2028 we expect
such a robot will be in people’s homes and the Robotaxi (think of Uber without a
human driver) will demand the inclusion of a humanoid robot that can do things
like deliver dinner or groceries.

Tomorrow Tesla will give us a taste of how advanced its robotics program is and
how likely we are to get a humanoid robot that helps us at home in five years or
less, along with a look at how well it can learn new jobs in the factory first.
Tesla also could explain the business model and why many Tesla owners will want
a robot in their home (it could be a key piece of the Robotaxi network –
plugging in cars to charge them and getting them back on the road).



There will be other insights, too. 

The catalyst to write this analysis is that we are both seeing signs of a
changing consumer, due to Spatial Computing technologies like autonomous
vehicles, Augmented Reality, and, particularly, robots.

If you buy into the premise that we are about to see changes in the technologies
that go into robots – the AI, the electric motors, the sensor arrays, and even
in how humans are living – then you will accept that interacting with the robot
will change people: from deciding on the brand of soap used in the home, for
instance, to letting the robot decide. In our research we’ve found that humans
will accept these kinds of changes faster than most consumer products companies
believe they will.

These changes go far beyond showerheads or the soap brand you use to wash your
clothes, though.

The robot brings with it a bunch of new technologies that could disrupt even
Apple, Google, or Amazon, and it soon will start bringing service after service
to your home.

The robot brings other robots. (The autonomous vehicle – itself a robot – will
bring the humanoid robot to your home, which will bring other, more specialized
robots in. This turns everything into a service.)

That statement alone brings radical shifts to the economy. 

Why hasn’t this happened yet?

 1. Until now robots were too expensive for general consumer use.
 2. No distribution or business model existed to help homeowners afford a
    fairly expensive new machine. How many homes can afford to pay $50,000 for
    one?
 3. The AI or software that controls robots was also very expensive and
    specialized. A robot at Ford’s plant in Detroit puts windshields into trucks
    every minute, but it can’t fold laundry. The humanoid robot could do both
    tasks, which points to similar changes coming to workplaces and factories.
    Our writing here focuses more on the consumer changes, but our previous book
    covered both and we bet the same will be true of this newsletter in the
    future.

All three issues holding back humanoid robots are going away at a pretty fast
rate. 

All of the technologies that go into a humanoid robot are coming down in price
at a pretty constant rate and are becoming more capable at about the same rate,
so you get an exponential improvement in the number of things a robot can do
over time. There are already many robots that do everything from vacuuming
floors to cleaning windows to picking weeds out of your garden. Plus, the
efficiency of the computers and motors that drive its hands and legs is getting
better over time, so we can now see it doing real work for hours on one charge.

Back to the autonomous vehicle and its role in turning everything into a
service: it will bring other robots to the home.

Once cars start driving without a human in the car, something that GM’s Cruise,
Waymo (spun out of Google), and others are already doing in San Francisco,
California and Phoenix, Arizona, then the car can bring other robots AND let the
robots be shared among a number of different houses, which defrays their cost.

This piece – how to get robots into homes – is what previous robotics companies
like Willow Garage or Giant AI, now out of business, were missing. What is that?
How to get the robots paid for. A $50,000 robot isn’t an expense many can
afford, even in richer neighborhoods. The autonomous vehicle unlocks a new
business model of turning everything into a service and sharing the robot’s cost
amongst many homes.



Autonomous vehicles will, alone, bring an upheaval as consumers move away from
owning cars and toward “transportation as a service.” What does that mean? The
best example today is Uber. You pull out your phone and you order a car. You pay
for what you use. If you only take one trip a month to the local shopping mall,
you’ll pay $20. Far less than the $400 a month a new Toyota costs. 

The humanoid robot could do the same. It could do your laundry for $100 a week,
then move next door, where it could do the same for your neighbor, collecting
another $100, and so on and so forth. And if it can do laundry, it can do a lot
more in the home or even your business.
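
A quick back-of-the-envelope shows why sharing changes the math. The $50,000 price and the $100-a-week fee come from the paragraphs above; ten homes per robot is our own assumption.

```python
# Back-of-the-envelope payback for a shared robot. The $50,000 price and
# $100/week laundry fee come from the text; ten homes per robot is a guess.
ROBOT_COST = 50_000        # dollars, from the text
FEE_PER_HOME = 100         # dollars per week, from the text
HOMES_SERVED = 10          # assumption: one robot rotates through 10 homes

weekly_revenue = FEE_PER_HOME * HOMES_SERVED          # $1,000/week
weeks_to_payback = ROBOT_COST / weekly_revenue        # 50 weeks
print(f"Payback in about {weeks_to_payback:.0f} weeks "
      f"(~{weeks_to_payback / 52:.1f} years)")
```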

When you add autonomous vehicles, humanoid robots, and other major technology
shifts like Augmented Reality and virtual beings that will arrive in 2023, you
see not just an economic upheaval but an almost complete change to what it means
to be human.

The last time we (Irena Cronin and Robert Scoble) studied the market together,
the result, The Infinite Retina, earned a “must read” review from Qualcomm’s
head of augmented and virtual reality products, Hugo Swart (Qualcomm makes the
chips inside everyone’s headsets other than Apple’s). We reconnected recently
after realizing that we were both seeing the same trends, from different points
of view, that very few others were seeing or studying.

Why now?

There are multiple autonomous vehicle companies now driving around without
humans. Yes, not many cities yet, but that will change. 

That, alone, sets up deep changes to economies around the world as more
passenger miles, shipping, and other services change from human driven to AI
driven. When it also brings humanoid robots into the home, while Apple brings
Augmented Reality to the home at the same time, we see something far more
profound happening than we saw when we wrote The Infinite Retina two years ago. 

Welcome to the “everything as a service” world and stay tuned to insights from
both of us. 

Why now? Because Tesla is updating the status of their Optimus humanoid robot
and possibly demonstrating an early version of it on September 30, 2022.

And, yes, the Optimus will push the doorbell button instead of knocking, if you
have one.

Life with Tesla Optimus

The first Tesla Cybertrucks will already be a few years old when Tesla’s
humanoid robot reaches the first waves of consumers, but what will it do when
it arrives in 2028?

Well, first of all, we need to talk about the Cybertruck. By 2028 it will be
driving around most cities without a human in it, along with other vehicles
Tesla makes. When that happens, the necessary preconditions for humanoid robots
will be here. One will walk off the production line and jump into a waiting
Cybertruck, which will bring it to people’s homes. Others will go into crates
to be shipped around the world to both manufacturing and home users. Once
released from their crates, they will be able to hop into a Tesla and other
vehicles.



How many more years after that will you see Tesla robots everywhere in Western
society? 2030? Certainly by 2035. 

They will help you load things into your truck at your local Home Depot, even
heavy sheets of sheetrock. In fact, Home Depot could order many humanoid robots
for each store. Such a store would quickly become nicer than Lowe’s, if Lowe’s
doesn’t also have the same robots.

Which leads to a lesson: every business soon will have to change deeply to
attract partnerships with Tesla and others who will want to compete with Tesla
as we move into the “Everything as a Service” world. More on how we see
businesses changing later.

Let’s go back to that original pizza delivery. What needs to happen to make it
possible?

 1. The robot has to be made.
 2. The AI has to be capable enough to go into, say, a Round Table pizza
    restaurant, and be able to talk with the humans there who are behind the
    counter — “Hi, I’m here to pick up two large pepperoni pizzas for Irena
    Cronin.”
 3. The robot has to be able to get to Round Table, get out of the vehicle, walk
    over any obstacle like mud, grass, dog poop, curbs, sand, stairs, etc., and
    get to both the counter at Round Table as well as the front door of your
    home while carrying the pizzas in a thermal pouch to keep them piping hot.

If it just did that, it would unlock the business model. But, the robot also has
to be programmed to interact with people. So, it has to understand people
deeply, and, even, have a personality to get to its fullest potential.

Why? Trust.

Would you trust a robot that just did what you told it, with no personality? Not
as much as if it talked to you in a human way and, even, entertained you.
Adrian Kaehler discovered this while running the Giant AI company (now out of
business, but it was working with factory owners to build a humanoid robot run
by AI, just like Tesla is). He discovered that when they made their robot look,
and act, more like a human, people accepted it more readily than their earlier
prototypes that just looked like a machine with hands.

Trust will soon be the most important score companies track about themselves and
their products and services.

Consumers’ attitudes toward computers doing intimate things with them, like
cooking in the kitchen together, will soon deeply change because of the impact
the autonomous vehicle will have on them.


Turns out once you trust a computer to drive you around the world, you change as
a consumer – you become far more likely to let an AI run your life after that,
since you realize that the AI doesn’t kill you while driving you around. After
that, trusting a company to have a robot in your home doesn’t seem nearly as
far-fetched a proposition as before you put your life in a robot’s hands
(autonomous vehicles are technically robots, too).

So, what will you trust your humanoid robot to do? What will its day be like?

Well, laundry, dishes, shopping, cleaning, gardening, security, maintenance,
and, even, saving your life. Future robots will be able to perform CPR on you,
saving your life if you have a heart attack in your home. It can call 911 while
it is doing that too. The operator might not even realize he or she is talking
to a computer. “Hi, I’m a Tesla Optimus calling on behalf of Mary Smith and I’m
currently performing CPR on her, and she is showing symptoms of having a heart
attack. She has a pulse, but it is a weak one.”



Not every day will be as dramatic for the robot as saving a life.

In fact, the first robots we see will be pretty simplistic. What is the
low-hanging fruit they will pick first? Deliveries! Yes, your first humanoid
robot will probably arrive at your door with some pizzas or groceries.

Take a Ride in the Future

A truck is rolling up with your pizza delivery. Delivery is the first thing the
humanoid robot will do as a service. Why? If you can’t deliver pizza, you can’t
deliver anything else. So it will be how many people have their first encounter
with a humanoid robot.

You are watching from your home’s front window as a human-looking robot hops out
of the passenger seat, walks to the back of the truck, and grabs a heated and
insulated bag of pizza from the cargo area and starts walking up to your house. 

You had heard they were coming, from TikTok videos from other neighborhoods. You
knew they could talk with you, show you things on their screens, and ring your
doorbell, but so far, even though they moved gracefully and quickly, they
couldn’t yet enter the home. This was just an introduction to the robot.
Even so, the experience is so different and unique that people record their
first meetings on their glasses and phones and share them on social media or
through direct messages to family members and friends. “It’s here.”

“Hello Irena Cronin, we have two pizzas for you. Here is what they looked like
when they were put into the box.” (The robot’s face turns into a screen, where
it shows photos of both pizzas being put into the box.)

“Hope you like your order. Today I can’t come into your home, but starting next
week, we can do some common tasks in the home for a low monthly price, but we’ll
do the first six months for free. Washing dishes and doing your laundry. Plus we
can monitor your home for you, making it safer and more efficient. All if you
want, of course. If you do, download the “Tesla Robot” app to your Augmented
Reality glasses (a barcode appears on the robot’s face). It has been a pleasure
serving you and your family. You can call me anytime with the Tesla app. Thank
you.”

“It’s weird talking to a robot who just handed me pizza.”

“We get that a lot. Hey, it’s weird for us too! We had to figure out how to get
here without ever being here before.”

The AI inside the robot has everything published on the Internet, and quite a
few other data sources to pull from in milliseconds. A conversational AI is
already planning out potential things the robot can say to you in response to
what you say to it. It knows how likely you are to laugh at its jokes before it
tells you one. If you laugh earlier or harder than usual, that will be noted in
a database about your humor preferences.

But let’s not get into the fun and games yet. The first robot is there to serve
a business and make a profit, not just tell you jokes. The first business is the
pizza delivery service.

It will be followed by thousands of services, all controlled by you as long as
you are talking to either the Tesla app on your glasses, or the same app on one
of your older tablets, phones, or computers. As long as you are within earshot
of the Tesla Optimus, and as soon as it verifies your identity, which usually is
done before you even start talking, you also have complete control of it.
Particularly if you own a Tesla vehicle, since you already are running the Tesla
app full time to control your vehicle. If you are an owner, you have a virtual
robot in your Augmented Reality headset that looks, talks, and walks exactly
like the real robot. It can walk next to you; you will think you have a real
robot walking alongside you. At least until you say something like “Hey, Tesla,
can you change my robot into an elephant?” If you have Augmented Reality glasses
on, why, yes it can!



To make a business, there are a lot of boring steps that have to happen before
the robot walks up and knocks on your door or rings your doorbell. The thing has
to walk into a Round Table pizza shop, wait in line like a human, introduce
itself to the person behind the counter, and ask for the pizza for Irena.

Also, when it is walking from the curb or parking space to your front door, it
has to navigate many different things. Some people have stairs. Some people’s
front doors are across weird bridges, some made of rock and wood, others even of
rope. We have visited homes all around the world – in China, India, Israel,
South Africa, and many European countries, along with homes in Canada and Mexico
– and have seen this.

Yet others might require walking across some dirt to get to the front door, or
navigating past a security guard keeping people who aren’t residents from
entering an elevator to the Penthouse Suites. And we haven’t even talked about
snow or ice that such a robot would need to navigate without dropping the
pizza. 

That, alone, will require huge computer science efforts that cost many billions.
Many of those billions have already been spent by teams building autonomous
vehicles at places like Google and its Waymo spinout, Apple, Tesla, Mercedes
Benz, Amazon’s Zoox, Nuro, GM’s Cruise, Aurora, Wayve, and a few others. But
moving a robot through a chaotic environment to your front door will require
billions more. Some people live in places with huge crowds right outside their
front doors; others live in the middle of forests. A robot will need to
navigate all that and interact with people along the way. Every interaction is
with a potential customer, so the robot has to be nice, funny, and trustworthy,
all in an attempt to win customers.

Just talking their way past security guards and doormen is quite a challenge for
human delivery people. Getting up to apartment 4B isn’t as easy as it looks
sometimes. Humans often have to call up a resident to validate they really
wanted a pizza delivery. The robot can do that automatically, showing the
resident on a video on its face. And if it uses one of the new 3D screens that
have been demonstrated recently, the robot can actually show what something
looks like in 3D on that face screen, including your face upstairs as you wait
for your pizza: 3D telepresence both inside and around a robot.

The big business idea is that the robots (self-driving cars) will bring other
robots (humanoid robots), which then will bring other robots (for specialized
tasks like vacuuming, cleaning windows, snowblowing, gardening, and probably a
lot more).

But for the first humanoid robot that gets into the home, there are also other
things it can do in addition to delivering pizza:

 1. Make the home and the people living there more efficient energy users.
 2. Give time back to the family to spend on something better.
 3. Build a buying club so bulk pricing lowers cost and improves quality of
    everyday things.
 4. Introduce new kinds of healthcare and other lifestyle services into the
    home, improving the health of everyone in the home (it can make better
    quality food, too). It can monitor your health just by walking by you.
    Imagine you run by one on your exercise route and it cheers you on just
    like my family and a variety of strangers did while I ran marathons in high
    school.
 5. Improve the safety and security of the home (it can be a sentry at home all
    night long, noting various problems before you wake up).
 6. Make sure you stick with its service and that you don’t kick it out of your
    home.
 7. Optimize the home, even tracking what clothes you wear by whether they
    disappeared from the home during the day.
 8. Introduce new experiences to the home. The robot could say “Hey, the robots
    are gonna watch the Beyonce concert tonight (we’ll even be part of the
    concert). You wanna come?”
 9. Introduce new bartering systems with your neighbors. Trading of food or,
    even, tools. “Hey, I’ll pay $5 to borrow a screwdriver.” The robot can
    arrange all sorts of things to be moved around the neighborhood.



Once the robot gets access to the home it can start optimizing it, looking for
things that could be improved. It also pays attention to the humans in the
home, building an internal database of things it learns about you as it watches
you. “The human Andrea Kaplan likes eating Cheerios at home at about 7 a.m.”

In the future this knowledge will make it possible to personalize everything,
particularly in relation to the robot. If you have a relationship with the
robot, even a cold, business-only one, it could notice you like Cheerios, so it
has a bowl, spoon, and your Cheerios and milk on your dining room table waiting
for you at 7 a.m.
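
For readers who want to picture that internal database, here is a deliberately tiny sketch. The schema, function names, and the Cheerios example are hypothetical; nothing here is a real Tesla data structure.

```python
# A toy sketch of the kind of habit log the essay imagines. Everything here
# (names, schema, thresholds) is hypothetical.
from collections import defaultdict
from statistics import mean

observations = defaultdict(list)  # (person, activity) -> hours of day seen

def observe(person: str, activity: str, hour: float) -> None:
    observations[(person, activity)].append(hour)

def likely_routine(person: str, activity: str, min_sightings: int = 5):
    hours = observations[(person, activity)]
    if len(hours) < min_sightings:
        return None  # not enough evidence yet
    return mean(hours)  # e.g. ~7.0 -> set out the Cheerios just before 7 a.m.

for day in range(7):
    observe("Andrea Kaplan", "eats Cheerios", 7.0 + 0.1 * (day % 3))
print(likely_routine("Andrea Kaplan", "eats Cheerios"))  # ~7.09
```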

Of course that means it needs to open drawers and find your spoons, and open the
refrigerator and find your milk. Even if it is just doing this simple task,
isn’t it also making a database of every product next to these things too? Of
course it is, and that, alone, will teach the AI a lot about your personality
and likes and, even, your belief system, bringing massive changes to what humans
believe about privacy.

Why? Well, imagine having a robot come into your house that didn’t talk to you
in a fun way. It just did the laundry silently, saying few words. Are you likely
to take it with you to your friend’s house? Or to a Home Depot to help with
picking up some things for your home improvement project? No.

So, we predict it will talk frequently with you about various topics, and, even,
high five you at appropriate times. Why? If you feel it really knows you and
entertains you, then you will learn to trust it with, say, doing the groceries. 

It is this trust that is worth trillions of dollars as the robot takes on more
and more things around you, turning them all into services. 

First Ten Years: Owning the Delivery and Home Services Markets

Expectations for Tesla Optimus

Here are the parameters, among others, that the Tesla Optimus will need to meet
before it can operate in people’s homes. We will be watching the Tesla AI event
to see how well it does on each of these:

1. It should lift at least 55 lbs. Why that much? That is what you can pack and
check on an airline. It might need to assist someone loading such a bag into a
trunk.

2. It needs to be very quiet. Even when moving around, you should never hear it,
except when it has to open a drawer or a door. On the other hand, total silence
might be unnerving for people, so: “Hey Tesla, can you play some music while
walking around?”

3. It needs to be able to communicate with humans via voice and hand signals,
along with a screen on its face, switching modes to whatever the human prefers.
For instance, the robot could switch to sign language for a deaf customer.

4. It needs to walk fast enough to keep up with a human entering, say, a Round
Table Pizza. Oh, heck, Boston Dynamics has robots that do parkour (jumping off
of buildings), so maybe we need a little more than just a slow walk, no?

5. It needs to be able to get into, and out of, a Tesla vehicle, including
putting a seat belt on and taking it off. For extra credit, it could “assist”
the car in driving tasks, for instance, by using its higher resolution cameras
to see further and gather better data to more accurately predict the speed of
oncoming traffic.

6. It must figure out how to either knock on the door (without leaving a mark)
or ring the doorbell.



7. It must be able to carry a package of goods, such as pizzas, from the cargo
area to the front door while always keeping it horizontal. Same with a cake.
Same with eggs. It can’t break anything or drop anything.

8. It must show the beginnings of a personality, with the ability to entertain
and delight. In other words, it must have conversational skills that so far
computers haven’t demonstrated.

9. It must prove that it will be able to bring more services into the home than
is possible otherwise (the business model of the robot bringing other robots).

10. It must demonstrate that it will never hurt humans, children or animals.

We’ll also be watching for skills that will be needed in both factory work and
home service work. For instance, can it install a towel rack at home? The skills
it would need are similar to those needed to put an electric motor into a
vehicle on an assembly line.

Why wouldn’t Tesla own the delivery and home services markets if it delivered a
humanoid robot that does all that?

Data, Data, Everywhere

Our thesis is that the biggest dataset wins a lot.

It isn’t just our thesis, either. Many strategists at many companies are trying
to find new sources of data. NVIDIA laid out the ultimate end of this strategy:
one data system that drives everything – robots, autonomous vehicles, Augmented
Reality, and virtual beings.

We call this new strategy “the data hydra.” NVIDIA’s Omniverse is the best
laid-out example, but others are being built at Tesla, Apple, Google, Niantic,
Meta, and Bytedance, among others.

On September 20th, 2022, NVIDIA announced new features of its Omniverse. At the
heart is a simulator that lets AI teams train the system to do new things, or
study a mistake it made by walking around the intersection where an accident
occurred.

This hydra needs the data to build a digital twin of everything. What is a
digital twin? Think of it as a very faithful digital copy of the real world. Our
factories, malls, parks, and other things will soon all have at least one copy.
In some places, like Times Square, we can see that there will be hundreds of
millions of copies. You could leave pictures or videos of your family on top of
this digital twin. And that is just the start.

Lumus, an optics company building the displays that will be in future Augmented
Reality glasses, showed us that by 2025 this digital twin will let us watch a
concert in a new way. All around our couch will be a music festival and, thanks
to Spatial Audio, it’ll sound concert-level too. In some cases what people hear
in their AirPods Pro will be better than what they would hear at a professional
concert, even a high-end one, like Coachella. Augmented Reality headphones there
“augmented” the audio, making it better. You could turn up the bass, for
instance, or remove crowd noise, or turn down the concert to a more acceptable
level. Business travelers already know that the best noise-canceling headphones
block out a screaming baby in the seat next to you.

Adrian Kaehler, a computer vision pioneer who built the vision system for the
first autonomous vehicle at Stanford and was an early key exec at Magic Leap,
started a humanoid robotics company, Giant AI. That company failed to get enough
funding. Why? If you start analyzing any job that a robot might do, you can see
that a humanoid robot that can walk around, learning on its own, will decimate
the others.



Where Giant had to show the robot a task six or so times to “teach” the AI how
to do it, like putting material into a machine, the Tesla robot – thanks to this
data advantage and all that it brings – will learn after one demonstration, or
will “reason” through it. After all, Tesla’s AI can drive down a street it has
never seen. Some joke that it will learn things from watching YouTube, but our
kids are already learning that way, so the AI can too. We no longer laugh. The
AI ingestion engines behind foundation models like Dall-e or Stable Diffusion
ingest hundreds of millions of images. Soon we will see these kinds of AIs
evolve into new kinds of information interactions (we used to call this
searching).

The robot might read you a poem it had generated by another AI, say, GPT-4, all
while putting away the groceries. Why? It knew you like poetry and wanted to
make you smile.

Let’s go directly to the point: after looking at how fast all the systems that
go into building a robot are improving, we now believe the humanoid robot of
2030 will be so good that humans who have one in their home will feel that it is
a friend and associate. If they are that good, you will bring one to lots of
places to “help” you and your family.

Tesla AI and Its Simulator Advantage

Every Tesla car that gets upgraded with the latest AI stack (which Tesla calls
FSD Beta) ends up uploading about 30 gigabytes of data to the neural network in
the Tesla cloud (the new version of which Tesla calls “Dojo”).

That feeds a simulator that lets researchers “walk around” what looks like the
real world. With moving pedestrians, bikers, hundreds of cars, and more. 

It is this simulator that is one of Tesla’s many secret advantages. The
simulator shows off the advantage of having huge amounts of data generated by an
army of Tesla robots (cars) moving around the world. 

It lets AI researchers train new AI models to do new tasks. In 2021 Tesla
introduced an autotagger into the system, which brought about a huge shift in
how these systems can learn. The AI already knows hundreds of thousands of
objects in the real world and automatically tags anything that it knows well.
This speeds up the AI’s ability to start learning automatically.
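
Tesla has not published how its autotagger works, but the general pattern (often called auto-labeling or pseudo-labeling) looks roughly like this sketch: a trained model pre-labels incoming fleet data, high-confidence predictions are kept as training labels, and the rest are routed to human reviewers. All names and the threshold here are illustrative assumptions.

```python
# Generic auto-labeling pattern, not Tesla's actual (non-public) pipeline.
from typing import Callable, List, Tuple

def autotag(frames: List[object],
            model: Callable[[object], Tuple[str, float]],
            threshold: float = 0.95):
    auto_labeled, needs_review = [], []
    for frame in frames:
        label, confidence = model(frame)
        if confidence >= threshold:
            auto_labeled.append((frame, label))   # trusted, goes to training
        else:
            needs_review.append(frame)            # routed to a human labeler
    return auto_labeled, needs_review

# Example with a stub model that is 97% confident about everything:
# labeled, review = autotag(frames, lambda f: ("stop_sign", 0.97))
```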

Which is where we are headed. There are plenty of examples of AI simulations and
robots that start out knowing nothing and, by trying thousands of little
experiments over time, figure out how to walk and move on their own.
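
Here is that learn-from-nothing loop in miniature: a random-search hill climber that keeps whatever scores better. Real locomotion training uses reinforcement learning in simulators at vastly larger scale; the toy reward function below is purely an assumption to make the loop runnable.

```python
# Minimal "thousands of little experiments" learner: random-search hill
# climbing on a toy reward standing in for "how far did the robot walk?"
import random

def reward(params):
    # Toy stand-in with a peak at (0.6, -0.2); not a real robot objective.
    return -((params[0] - 0.6) ** 2 + (params[1] + 0.2) ** 2)

best = [random.uniform(-1, 1), random.uniform(-1, 1)]
best_score = reward(best)
for _ in range(5000):                       # thousands of tiny experiments
    trial = [p + random.gauss(0, 0.05) for p in best]
    score = reward(trial)
    if score > best_score:                  # keep only improvements
        best, best_score = trial, score
print(best)  # converges near [0.6, -0.2] with no prior knowledge
```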

Tesla has the advantage of being able to study humans in a new way while driving
around the real world. Its researchers already needed to train AI models to
understand human movement – what the data from a human running, walking, or
biking looks like. It needed to do that to behave properly around humans in the
streets.

This research will give Tesla a lead when it comes to building humanoid robots.
It will use the simulator to train robots to do a wide variety of tasks, long
before Tesla makes a physical robot that can walk into your kitchen. 

How Does It Progress Over Ten Years?

The next ten years will see radical change due to Spatial Computing:

 1. Augmented Reality glasses are worn by the majority of people.
 2. Autonomous vehicles are everywhere on our streets.
 3. Virtual beings hang out with us all day long.
 4. Robots of all types are working all around us.
 5. Many homes now have solar AND backup batteries AND an electric vehicle
    charging station.
 6. AI systems now ingest massive amounts of data every day and can hallucinate
    back to you complex scenes, along with running everything in life. The AI
    foundation models that bring us things like Dall-e are going to look very
    quaint in a decade.



Here are our predictions for the humanoid robot specifically in 2033:

 1.  It will have much higher resolution imaging sensors (cameras, LIDARs, etc.)
     than today. By 2033, cameras on autonomous vehicles and robots will go from
     the 1K they are today to 32K. That means they can see further, and smaller,
     things. Where a robot might have struggled to pick up a very small screw
     before, now it can see it without any problem (see the sketch after this
     list). It also means a robot in an older autonomous vehicle will be able to
     “assist” the original vehicle and see further.
 2.  Most tasks in the home will be turned into services by then. The robot will
     even be able to install many consumer electronics, or a shower rack in the
     bathroom.
 3.  It takes over the management of the home (laundry, dishes, garbage,
     security, and monitoring and controlling all lights, appliances, vehicles,
     charging, and more). 
 4.  At homes that have an electric car charging station, the robot will meet an
     incoming vehicle and plug it in for charging. This will make the Robotaxi
     system more resilient and let it get vehicles back on the road after a
     charge.
 5.  Robots will run many businesses that only cater to the automated vehicle
     network (making food that gets delivered to people’s homes, for instance).
 6.  An “air traffic control system” that runs the transportation as a service
     that Elon Musk calls “Robotaxi”  will make sure robots and autonomous
     vehicles are sent to the right place at the right time. This is difficult
     because when there are large concerts, for instance, like Coachella, this
     control system will need to move thousands of cars from around the Western
     United States to Palm Springs to move people around there (we visited
     Uber’s effort at that festival to understand the traffic control and
     movement issues involved).
 7.  Humanoid robots used to be “janky” because they couldn’t do a wide variety
     of things well. Those days are gone – AI rapidly learned to get better.
 8.  Humanoid robots will be very advanced at interacting with humans compared
     to today. It won’t be unusual to have long, detailed conversations with
     your robot.
 9.  The network will be a lot smarter than today. Your robot will know
     everything that is happening across the world, in real time. It can read
     every single tweet. You certainly can’t. 
 10. The robot will enable new services in your home, like telepresence that
     goes way beyond what Zoom makes possible today.
 11. Automatic shopping services are common. Consumers learned to trust their
     autonomous vehicles with their lives and, so, hand over shopping to the
     robot who, from that point on, always makes sure your refrigerator has milk
     for the kids.
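
A rough way to sanity-check the resolution claim in item 1: resolving power scales linearly with horizontal pixel count across a fixed field of view. The sketch below uses the post’s 1K and 32K figures; the 90-degree field of view and the 8-pixel detection threshold are illustrative assumptions, not specs.

```python
# How much farther can a 32K camera resolve the same object as a 1K camera?
# Rough pinhole reasoning; the 1K/32K figures are the post's, the rest
# (field of view, pixel threshold) are illustrative assumptions.
import math

def max_range_m(object_size_m, horiz_pixels, fov_deg=90, min_pixels=8):
    pixels_per_rad = horiz_pixels / math.radians(fov_deg)
    # Small-angle approximation: the object spans size/distance radians.
    return object_size_m * pixels_per_rad / min_pixels

screw = 0.005  # a 5 mm screw
print(f"1K:  {max_range_m(screw, 1_000):.2f} m")   # ~0.40 m
print(f"32K: {max_range_m(screw, 32_000):.2f} m")  # ~12.7 m, 32x farther
```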

What really gets fun is when you mix robots, autonomous vehicles, together with
Augmented Reality glasses. That brings effects that will be hard to predict.
After all, when movie cameras and cinemas were invented how many more decades
did it take for Star Wars to show up?

But we can see people getting a few robots to come over for Friday evening with
their friends, and the robots will serve dinner and then will perform a short
skit for entertainment after dinner. You’ll wear your Augmented Reality glasses
and that will “dress up” your robot in different characters. Immersion is really
improved when your robot hands you things while inside an immersive experience.
This is how 2033 could be very weird compared to today.

The Big Picture



What are the possible larger impacts of the Tesla Optimus? With the Optimus
comes an increase in home services spending, as well as an opportunity for Tesla
to control the complete supply chain of products that Optimus uses in the home.

The increase in home services spending comes from consumers buying the services
that Optimus can do – those services that a person does not have time to do, or
just does not want to do. Optimus can serve the same kind of function that a
housekeeper or maid does, but can handle more work at once and for a much longer
period of time.

Additionally, Optimus can do things in the home that a housekeeper cannot, such
as run diagnostics on major appliances to gauge how they are performing and
whether they are running efficiently. It could also do this for the Tesla car in
a person’s garage, as well as ready it for use in the morning, which is really
useful to tired, hard-working families and professionals.

In addition to these functions, it could serve as a very energetic handyman,
plumber, housepainter, etc. Doing all these services and replacing traditional
professionals significantly changes the dynamics of the home services market.
This disruption has the potential to substantially enlarge that market due to
the efficiencies and superior attention to detail and physical strength of the
Optimus. 

In terms of how the Optimus would be made available to consumers, there would
probably be several different channels. One possibility would be for a company
to buy several Optimuses and rent or lease them out. Another would be direct
purchases by upper-class families, and a third could be the purchase of
community Optimuses by homeowners associations (HOAs), neighborhoods, or cities.

In the process of its work, the Optimus will be using cleaning products and
house improvement and handyman goods. For ease and scale, Tesla has the
opportunity to make direct deals with the companies that provide these since it
would need these in bulk. In this way, Tesla could control the complete supply
chain of these products and goods for the Optimus; companies that make these
products and goods would line up to be included since the sales volume would be
so high.

While it is difficult to assess right now how big the potential market will be
for the Optimus, it would encompass a large majority of upper-middle and
upper-class people with families, as well as single, childless professionals,
first in the U.S. and then in other parts of the world.

The economic impact that the Optimus brings, taking into account even mid-market
penetration, will be significant. Why? Because the potential market is so big.
Because the Optimus can do such a wide range of tasks, it will be relatively
more efficient and will consolidate and increase the need for its services. The
Optimus can go to a home and perform, during a single visit, many varied
services that would usually take four or five different kinds of workers.
People who would not have been enticed by certain home services before would now
take advantage of having those services done. Why not? If the Optimus can cook,
mow the lawn, paint, babysit, diagnose electrical issues, and so much more, it
is very convenient to have it do many varied tasks during any visit.

What is the impact on the workers the Optimus replaces? Yes, this has the
potential to put many different categories of service people out of business.
Robotics and automation tend to have that effect in all kinds of areas of life.
We don’t have an answer as to what will happen to the displaced workers; we only
know that it will happen.




Robert Scoble · Personal · September 29, 2022 · 27 minutes


THE FIRST TOUR OF GIANT AI’S ROBOT LAB


Visiting Giant AI

Visiting Giant AI is like getting a tour of a secret lab that shouldn’t exist,
run by an eccentric genius – the kind we remember from “Back to the Future.”

Adrian Kaehler is that genius. 

He built the computer vision system for the first autonomous vehicle that later
became Waymo. He played a key role in the early development of Magic Leap, an
augmented reality company that just won best of show at the industry’s biggest
gathering, AWE (Augmented World Expo). He also wrote what many say is the book
on Computer Vision, which is still used by many computer science departments.
Today he is leading Giant AI, which is building humanoid robots that can work on
manufacturing lines, doing the same jobs humans used to do and many new ones.
Giant is backed by Bill Gates and Khosla Ventures.

He saw long ago the problems that existing robots bring. The earlier companies’
robots were designed and built to be very precise, which means they remain
expensive today. You see many of these in factories today: they are heavy, don’t
work well with humans, have to be programmed months in advance, are hard to
retrain, and don’t recover well when errors are made. Some are too dangerous to
be around, like the ones in Tesla’s factory in Fremont, which keeps some robots
in cages to keep all humans away.

He also saw the solution: AI that builds a new kind of operating system. One
that learns faster than anything most humans could imagine – it learns so fast
that you only need to show it a few times how to do something and it’ll do that
thing from then on. One that enables new lower-cost components to be used, ones
that are less precise.

When I watch the Universal Worker move, I can see how the tendons that make it
work create a very different, animal sort of motion. It is kind of springy.
This would be a non-starter for a traditional robot, but the AI control, just
like with a person, manages this and makes it all work out. Dr. Kaehler tells me
that the use of this sort of tendon system is central to how the robot can be so
light and dexterous, as well as why it can be so much less expensive than
traditional robots.
It’s the new AI that enables this new lower-cost and safer approach.

So, getting into his lab first meant a lot to me. Why? I think it means a lot to
you, too. 

It means we will have to rethink work. From scratch.

Is your happiness and income coming from you pushing a button on a machine?
Really? I worked on HP’s manufacturing line when I was a young man of 17. One of
my first jobs was working the wave soldering machine there, shoving circuit
boards into the wave, which instantly soldered the whole board. I had helped my
parents and brothers hand-build hundreds of Apple IIs. My mom taught us to
solder. If you got good at it, like my mom was, you could do maybe a board in 30
minutes. I saw from my kitchen how manufacturing lines can change labor. My mom
worked for Hildy Licht, who got hired by Apple to take on the task since Apple
couldn’t make enough in its own factory. Apple cofounder Steve Wozniak, AKA
“Woz,” told me that those boards had fewer failures than the ones made in
Apple’s own factory. It also makes me Apple’s first child laborer (I was 13 at
the time).

Anyway, I never wanted to do such a job again, given how boring it was. I loved
that wave machine because it saved many hours of labor. I had to stuff boards
over and over and over again, and I dreamed of a day when a robot would do that
too.

I wish I had a Universal Worker by Giant AI Corporation back then.



As he showed me around, he told me what makes these robots so special. The AI
inside is next level. See, I’ve been following AI since the very beginning.



I was the first to see Siri.

That was the first consumer AI app. I also have the first video, on YouTube, of
the first Google Self-Driving Car, long before anyone else. That was the first
AI on the road. I have been following AI innovators since the very beginning.

This robot is using the next generation of all that.

Don’t worry, though.

You do get that we are in an exponential world, right? One where we need more
workers, not fewer. Even if Giant got a huge funding deal at, say, a
billion-dollar valuation, it still couldn’t build enough robots to replace ANY
human for MANY years. These are to fill in the gaps for when you can’t get
enough workers to keep up with demand.

Anyway, back to the lab. Along each side I saw a row of robot prototypes for
what Giant AI is calling “the Universal Worker.” Each was being tended to by
staff, and as Adrian gave me the tour he explained what each was doing. They see
using a new form of ML built on neural radiance fields – the engineers are
putting finishing touches on blog posts, coming soon, that go deep into the
technology. In the video Kaehler goes into some depth about what it’s doing and
how it works.

Each robot had a humanoid form, even a smile on the face. And the head moved in
a very unique way that I had never seen before – strangely humanlike. Which,
Adrian says in the video embedded here, is part of its ability to learn quickly
and also gain the trust of the human working alongside it. It also lets it do a
wider range of jobs than otherwise. It sees the machine, or task, it is standing
in front of like we do: in 3D. And, in fact, there are many other similarities
between what runs under robots, virtual beings, autonomous vehicles, and
augmented reality headsets and glasses. Kaehler is the only human I know who has
built three of those, and he says that they all are strongly connected
underneath in how they perceive the world and let others see the perceived and
synthesized world.

As you get a look around his lab, you do see that they feel like early versions
of the Tesla Autopilot system: a little rough and slow. Heck, even today, four
years later, Autopilot does 6,000 more things, but it still seems a little rough
and slow. The Universal Workers feel a bit the same to me. At least at first,
until I started watching and realized that this was real AI learning how to
grasp and drop things. It felt humanlike as it placed a rod onto a machine yet
another time in a row without dropping it.

I remember talking to the manager of the Seagate hard drive factory in Wuxi,
China, about why he hired so many women. Nearly his entire final assembly line
was women, highly trained too; I watched several run a scanning electron
microscope. I never will forget what he told me: “Men drop drive off line, pick
it up, put it back on line. Women don’t do that. They bring over to me and admit
fault.”

This robot was learning quickly how to recover from its mistakes. Which is how
it was designed, Adrian told me. It has grids of sensors in each finger, which
allow a new kind of “feeling” that I’d never seen a robot use before. Each of
those sensors was being pushed and pulled by a cord going to a machine in the
belly of its humanoid form, on the end of an arm built with cheap consumer
processes. The hand shakes, just slightly, especially if a big forklift goes
by.



Giant’s AI is what makes it possible for the robot to be far less expensive. It
“sees” the world in a new way, using something the AI engineers call “Neural
Radiance Fields”: a new form of 3D scene representation that you can walk
through. In Giant AI’s case it moves the hands through these radiance fields,
which are unlike any 3D data structure we’ve ever seen before.
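
Giant hasn’t published its implementation, so treat this as a generic sketch of the neural radiance field idea those upcoming blog posts will presumably cover: a field maps a 3D point to color and density, and a ray through the scene is rendered by compositing samples along it. A hard-coded sphere stands in for the trained network.

```python
# The standard radiance-field idea in miniature (not Giant AI's code):
# a field maps a 3D point to color + density; a ray is rendered by
# integrating samples along it.
import numpy as np

def field(points):
    """Toy field: a reddish sphere of radius 0.5 at the origin."""
    inside = np.linalg.norm(points, axis=-1) < 0.5
    density = np.where(inside, 10.0, 0.0)                     # sigma
    color = np.where(inside[:, None], [1.0, 0.2, 0.2], 0.0)   # rgb
    return color, density

def render_ray(origin, direction, n_samples=64, far=3.0):
    t = np.linspace(0.0, far, n_samples)
    points = origin + t[:, None] * direction
    color, density = field(points)
    delta = far / n_samples
    alpha = 1.0 - np.exp(-density * delta)                    # per-segment opacity
    trans = np.cumprod(np.concatenate([[1.0], 1 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)             # composited rgb

print(render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0])))
```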

Its AI is constantly adapting and learning, which lets it figure out how to
recover from a mistake very quickly. Adrian wrote the math formula on the board
on a previous trip. It keeps pushing the hands toward the best possible outcome.
So, you can slap them and they’ll recover. Or, if an earthquake hits, the
machine shakes, and the robot drops your motor before it goes into the box it
was supposed to go in, it should still be able to complete the task, just like
a human would, or try to save the part if possible, and report a problem if
needed.
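
Kaehler’s actual formula isn’t reproduced in this post, but the “keep pushing toward the best outcome” behavior is, at its simplest, closed-loop feedback control. This toy one-dimensional controller shows how such a loop shrugs off a slap; the gain and disturbance size are arbitrary assumptions.

```python
# A stripped-down "keep pushing the hand toward the goal" loop: a
# proportional controller in 1D. Generic control, not Kaehler's formula.
target = 1.0      # where the hand should be
position = 0.0
gain = 0.3        # fraction of the remaining error closed each step

for step in range(40):
    if step == 20:
        position -= 0.8          # a "slap": sudden disturbance mid-task
    error = target - position
    position += gain * error     # always push toward the target
print(round(position, 3))        # back near 1.0 despite the slap
```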

Anyway, at this point, you are wondering “why did you hype up Tesla’s robot so
much?” Last week I did just that. Those who are inside the Tesla factory tell me
that their simulator gives them an unfair advantage and will let them build a
humanoid robot that can walk around and do a variety of tasks much faster than
people are expecting. You’ll see Tesla’s robot in September as part of its AI
day announcements. Yes, hardware is hard, but even so, with the best simulators
it is getting easier.

In a way this is a David vs. Goliath kind of situation. So Giant had to focus on
a very specific, but large enough, problem: one of low-skilled workers and what
they need help with.

Which is why Giant’s Universal Worker doesn’t have legs. Giant isn’t a
trillion-dollar company. It can’t afford to put legs on a robot that doesn’t
need them. A worker in a factory always stays in the same place and does the
same job over and over.

It doesn’t spy on you the way that the Tesla robot will (Giant’s AI only can
“look at” the work surface in front of it). It can’t walk around your factory
floor mapping it out, or watching workers in other parts of your plant as it
walks around.

It also doesn’t have a complete mouth, or a voice response system, or the
ability to really communicate well with other human beings the way the Tesla
robot will need to do. Which makes the Giant robot far cheaper than the Tesla
ones, and it is ready now, at a speed slower than a human’s or, soon, at the
same speed.

That said, Kaehler is keeping up to date on the latest computer vision research,
and he knows that Tesla’s robot will do many things Giant’s can’t, and that’s
fine with him. He doesn’t have a car company to gather data about humans in the
real world. It isn’t his goal to build a robot that can deliver pizza – just to
do the boring jobs that humans need an extra set of hands to help with.

Giant AI already has orders, Adrian says, but the company needs funding to get
to the place where it can manufacture the robots itself.

I remember visiting “Mr. China” Liam Casey. I visited him in his Shenzhen home
and he gave me a once in six thousand lifetimes tour of Shenzhen that I treasure
to this day. Then he took me on an even wilder one over his homeland of Ireland,
where he took me to a research lab that Mark Zuckerberg ended up buying.

What did Casey teach me? He had the same problem. No one would invest in his
business, even though he had customers. How did he get his orders done? I asked
him. “I got them built.”

“But how? Did you have something to trade? A house, an expensive car, secret
photos, what?”

“My US Passport.”

The factory owner demanded his passport in trade for building his order, a form
of collateral I’d never heard of before. The owner then had Casey travel the
country to all his factories to do a certification on each. That led Casey to
see the power of databases, particularly ones for tracking supply chains, which
is why he is Mr. China today and makes, in his company PCH, many products that
are probably in your home right now. He used that early research about China’s
factories to become the supply chain leader that many technology companies use
to build their products.

Giant needs the same today: a way to get the product finished and manufactured.
Capital, and lots of it, to get to where these robots are working hard to make
everyone’s lives better.



Tesla’s simulator has ingested a lot more than just where the car has gone. It
knows EVERYTHING about its owners. So, when an engineer wants to recreate a
street, it is amazingly real, and the people in it will even stop to say hello
or let you check out their dog. Then you can make it rain. Or make it sunny.
Over and over and over again.

Why is it so magical? BECAUSE OF the data the car and phone collect. A Tesla
crosses the Golden Gate Bridge every 10 seconds. No one else is even in the same
universe in data collection capabilities.

Tesla has a similar bleeding edge AI to Giant’s but Tesla’s has billions of
times more data than Giant ever will get its hands on.

However, do you need Tesla’s AI and simulator, which will have to do a whole lot
more, when all you need is a machine to push a button or two every time it
notices a job is done? No, at least not now, because the costs will be far
higher for the Tesla robot, which will need to walk and get in and out of
autonomous vehicles.


That said, now that I’ve seen the Giant AI and how sophisticated it is with
literally no data compared to the Tesla system, I realized that the Tesla one
must be far more advanced, and I started asking around.

The Tesla robot will need to get out of an autonomous vehicle and figure out how
to get a pizza up to your apartment, or to your front door. You figure that out
by talking to many people, like I do.

Kaehler showed me how Giant’s AI would do that if it had access to the Tesla
data and resources, particularly its simulator, which rafts of people can “jump
into” and walk around, training it over and over to get things right. The demos
you see in the video, as impressive as they are, are quaint compared to what the
resources of Tesla can generate.

Every day I’m more and more convinced my predictions are conservative. Either
way, getting the first look at Giant’s Universal Worker gives us a good look at
the future of work, so I hope you appreciate being first to see this. I sure
did.


Robert Scoble · Personal · June 28, 2022 · 10 minutes


WHEN WILL AUGMENTED REALITY GLASSES BE READY FOR CONSUMERS?

Unfortunately it has taken a lot longer to get augmented reality glasses to
consumers than I expected. A lot longer. I thought we would be wearing them all
day by now. Heck, when I got Google Glass years ago I thought I never would take
those off.

Boy, was I wrong.

Many in Silicon Valley taunt me for my previous optimism, saying “not this year,
not next year.”

That doesn’t mean they aren’t getting closer. For the past seven years I’ve been
watching Lumus, a small company in Israel that makes the best lenses/displays
I’ve seen so far. Every two years they come and visit me and show me their
latest. Every time, the displays get brighter, lighter, more efficient, smaller,
and more.

Here, in video, is Lumus’ head of marketing showing me its latest displays, and
you can see just how big an improvement the company has made. These are getting
much closer to the size and quality that consumers will be happy wearing.



But since I have been so wrong before, I wanted to take a more sober look at
these displays and ask myself “when will consumers buy these?”

That may just be the wrong question. Unless I were working at Meta or Apple or
Snap.

Enterprise uses of these are coming right now. Just look at the revolution in
robotics that is underway. AI pioneer Adrian Kaehler has been retweeting every
amazing robot on Twitter (he is CEO of Giant AI, which makes manufacturing
robots coming over the next year), and there are dozens that work on all sorts
of production lines, not to mention do a variety of other jobs. These glasses
would be perfect for controlling, and training, all these new robots. And a
variety of other things, from training to surgery. This is why Magic Leap has a
new shot at life that I also didn’t see coming, given its cord and lack of
consumer experiences.

Other augmented reality companies have pivoted away from consumers and toward
enterprise uses of these glasses and devices (most notably Magic Leap and
Microsoft’s HoloLens).

Why?

Well, for instance, look at some of the limitations of even these amazing new
displays from Lumus. While they are many times brighter than, say, the Magic
Leap or HoloLens displays, and have bigger fields of view, the image does not
quite match my 4K TV, which cost me $8,000 last year.



So, consumers who want to watch TV, or particularly movies, in their glasses will find the image quality still not as nice as a bleeding edge TV, tablet, or phone display (although indoors they are damn close), even though augmented reality glasses offer many other advantages (like letting you watch on a plane or while walking around, something my big TV can't do). But these are dramatically better than they were the last time I saw Lumus' latest. White whites. Sharp text. Bright videos and images.

The field of view, too, is 50 degrees. OK, that does match my 83-inch TV when I am sitting on my couch (the image in the Lumus is actually slightly bigger than my TV), but that isn't enough to "immerse" you the way VR does. Will that matter to consumers? I think it will, but 50 degrees is way better than what Snap is showing in its current Spectacles. In 2024's devices screens will be virtualized, too, so the hard field-of-view numbers won't matter nearly as much. These are certainly better than my HoloLens's field of view.
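
To see why a 50-degree field of view roughly matches a big TV at couch distance, here is a back-of-the-envelope geometry sketch in Python. The 16:9 aspect ratio and the 6.5-foot viewing distance are my own assumptions for illustration, not figures from the post or from Lumus.

    import math

    def horizontal_fov_degrees(diagonal_in, distance_in, aspect=(16, 9)):
        # Angular width of a flat screen viewed head-on.
        w, h = aspect
        width = diagonal_in * w / math.hypot(w, h)  # screen width, inches
        return math.degrees(2 * math.atan(width / (2 * distance_in)))

    # An 83-inch 16:9 TV seen from about 6.5 feet (78 inches) away:
    print(round(horizontal_fov_degrees(83, 78), 1))  # ~49.8 degrees

Sit a foot or two farther back and the TV drops well under the glasses' 50 degrees, which is why the two feel comparable from a couch.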

Also, bleeding edge TVs, like my Sony OLED, have better color and luminance depth. What does that mean? TV and movies still look better on my TV. But that, too, is sort of a bad comparison: my TV can't travel with me. These displays are pretty damn good for a variety of uses; I just wish I didn't need to wait until 2024 to get them.

This is why many who are working on Apple's first device tell me it is NOT a pair of see-through glasses like these. They just don't match consumer expectations yet (although these Lumus lenses are a lot closer than any others I've seen so far).

Apple's first device is what those of us in the industry call a "passthrough" device, NOT a pair of glasses like what Lumus is showing here. In other words, you can't see the real world through the front of the device unless it is on, in which case it presents a digital recreation of the room you are in (I hear Apple's new version of augmented reality is pretty mind blowing, too).

Until this next generation of devices arrives, these glasses will mostly be used for R&D or enterprise purposes, like controlling robots or production lines, or doing things like surgery, where field of view, brightness, and so on aren't as important as they will be for consumers. Lumus is selling its much better lenses to consumer-focused partners, but doesn't expect the really interesting glasses until 2024.

I've been working with a variety of enterprise users, and there is a deep hunger here for better glasses. At Trimble, a construction technology company, for instance, they are working on a variety of initiatives: using Boston Dynamics robots to map out construction sites in 3D, then using HoloLenses for a variety of tasks. The problem? The HoloLens has displays of only about 400 nits, which is the technical way of saying "dim, with poor color and very little readability in bright sunlight." Lumus' displays are 5,000 nits. Yesterday I took them outside and saw that they are plenty bright for sunny environments.
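
To make those nit numbers concrete, here is a rough see-through contrast sketch. The background luminance and the readability threshold are my own assumptions (bright outdoor scenes commonly run in the low thousands of nits); none of these figures come from Trimble or Lumus.

    # Rough see-through AR legibility: the display adds light on top of the
    # real-world background, so contrast ~ (background + display) / background.
    BACKGROUND_NITS = 3000.0  # assumed bright outdoor scene
    READABLE_RATIO = 2.0      # assumed minimum contrast for comfortable text

    def contrast(display_nits, background_nits=BACKGROUND_NITS):
        return (background_nits + display_nits) / background_nits

    print(round(contrast(400), 2))   # HoloLens-class: ~1.13, washed out
    print(round(contrast(5000), 2))  # Lumus-class: ~2.67, readable outdoors

Under these toy assumptions, a 400-nit display barely rises above the sunlit background, while a 5,000-nit one clears the readability bar with room to spare.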

Also, the HoloLens is big and heavy compared to the glasses that Lumus and many others are readying. Construction workers are not happy with the size of the HoloLens, or even of the Magic Leap, which has a cord running down to a computer that clips onto your belt.

These enterprise users are hungry to buy a decent set of augmented reality
glasses. Lumus should help its partners get to these markets long before Meta,
Snap, or Apple figure out how to get consumers to want to buy glasses.

How will I evaluate whether the market is ready?

Let’s make a list.

1. Brightness. 2,500 nits is perfect for most enterprise uses (the HoloLens is only 400, and all my clients complain about its lack of brightness and visual quality). Lumus says theirs can do 5,000, which gets close to consumer expectations. That's a big improvement over both past versions and the competitors I've seen.



2. Color. The Lumus lenses are much better than others I've seen. Pure whites and decent color (I could watch TV and movies in them). Enterprise is ready. Will consumers take to these in 2024? I think so. There is no color fringing like I see on my HoloLens. Much, much nicer.

3. Size. The projectors in the Lumus are much smaller than they were three years ago when I last saw Lumus' work. Very awesome for doctors, construction workers, production-line workers, etc., but still a bit too big for "Ray-Ban" style glasses. Still, I could see wearing these for hours.

4. Cost. They avoided this question, sort of, but costs are now coming down enough to enable devices at $2,000 or less. That is acceptable for many enterprise uses, but still too high for most consumers. That said, I'm wearing glasses that cost $1,500 before insurance, so we are heading toward consumer pricing.

5. Battery life and heat generation. Here Lumus has made big strides. They claim devices running their latest projectors will be able to go for hours, even all day, depending on how often the displays are showing information (see the duty-cycle sketch after this list). That is great for, say, a surgeon using a system like the one MediView makes: the displays only need to be on for a few minutes during surgery. The same goes for many other enterprise uses; most workers won't be trying to watch live streaming video all day long, like consumers will. Also, these don't heat up like others on the market do. But for consumer uses? Not quite there yet. Consumers will want to watch, say, CNBC all day long while also working on screens of information.

6. Field of view. Yes, it's better than my expensive 83-inch TV, but not by much. Consumers will have higher expectations than just 50 degrees. Enterprise users? They don't care much at all; the benefit of having screens on their eyes outweighs the lack of wrap-around screens.

7. Content. Consumers will want to do everything from editing spreadsheets to watching TV shows and movies to playing video games. Lumus will never build any of that itself, so its partners will need to come up with all of it. Enterprise users are far more focused on very specific use cases, like controlling a robot or seeing data on production machinery. That's a hard job, for sure, but a far easier one than delivering the wide range of things consumers expect. Yes, the Metas, Apples, Googles, Snaps, Niantics, etc., are working on all that, but they aren't nearly ready with enough to get consumers to say "wow."

8. Resilience. Consumers will want to wear these devices out in the rain. They will drop them. Their kids will step on them. How do I know? All of that has happened to my glasses, which I'm forced to wear simply to see. Enterprise users are more focused on safety, and many jobs, like surgery, won't need nearly the kind of resilience that consumers will.
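
Here is the duty-cycle sketch promised in item 5: a toy model of how runtime scales with how often the display is actually lit. Every number in it is a hypothetical guess of mine, not a Lumus spec.

    # Toy battery model; all figures are illustrative guesses, not Lumus specs.
    BATTERY_WH = 1.5    # assumed glasses-sized battery capacity
    DISPLAY_W = 0.8     # assumed draw while the projectors are lit
    IDLE_W = 0.15       # assumed draw for sensors, radios, standby

    def runtime_hours(display_duty):
        # display_duty: fraction of the time the displays are on (0.0-1.0)
        return BATTERY_WH / (IDLE_W + DISPLAY_W * display_duty)

    print(round(runtime_hours(0.05), 1))  # surgeon, display on ~5%: ~7.9 hours
    print(round(runtime_hours(0.90), 1))  # video-all-day consumer: ~1.7 hours

The exact numbers don't matter; the point is that an occasional-glance enterprise user can plausibly get a full shift out of the same hardware that would die before lunch under consumer-style all-day video.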

Now, can all these problems be fixed by, say, an Apple or a Meta or a Snap? Sure, but I bet on Apple being more aggressive, and that didn't happen. So we will need to watch next year's launch of a bigger, heavier device aimed at home users to see how consumers react to augmented reality devices on their faces.

Now, is there someone out there with glasses ready to go sooner? Maybe, but even if, say, NVIDIA has a pair that does a lot, will it have all the advantages of Apple? No way. Not for a while.

This is why Mark Zuckerberg told investors that it will be "years" before augmented reality devices make big money with consumers. Even Meta's VR effort, after seven years on the market, a ton of content, and a low $300 price, is only selling about 1.5 million units a quarter (Apple sells that many iPhones in about two days).
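
That parenthetical checks out with quick arithmetic, assuming Apple sells roughly 230 million iPhones a year (a commonly cited 2021 figure, my assumption rather than anything in the post):

    # Assumed annual iPhone volume; the Quest figure is from the paragraph above.
    iphones_per_day = 230_000_000 / 365
    quest_units_per_quarter = 1_500_000

    print(round(quest_units_per_quarter / iphones_per_day, 1))
    # -> ~2.4 days for Apple to match a quarter's worth of Quest sales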



Translation: as excited as I am about going to this week's Augmented World Expo, we still have a couple of years to go, at minimum. I'm bummed writing that, but it's better to be realistic about the near future than optimistic.




Robert Scoble Personal May 31, 2022 7 Minutes




COPYRIGHT ROBERT SCOBLE: ALL RIGHTS RESERVED. 2022

