
DOC-OK.ORG


A DEVELOPER'S PERSPECTIVE ON IMMERSIVE 3D COMPUTER GRAPHICS






WHY DESKTOP LINUX SUCKS

Posted on September 19, 2023 by okreylos

Now that’s a clickbaity title, you might say, but it’s actually the title of a
video I watched the other day:



Linus Torvalds is a fun speaker, so go watch the video. I’ll wait here.

That was the bait, now here is the switch: I don’t think that Linux sucks as a
desktop. I have been using Linux as my desktop computing environment since SGI
IRIX stopped being a thing, so maybe in 2001, and it’s fine. There have been
advances, there have been serious setbacks (Gnome 3, anyone?), but overall it
lets me do what I need to do and otherwise doesn’t try to get in my way. I even
gave Mac OS X an honest shot when I bought a MacBook Pro in 2008, but it just
felt really constraining, so I wiped it after about a month and installed Linux
instead and never regretted it. I have a partition with Windows 10 in it on my
home computer’s hard drive, but I can’t remember the last time I booted into it.

Side note: How fun it is to have a dual-boot Windows 10 partition to play video
games! Hey, I have an hour of free time, let’s play something quick. Okay, shut
down Linux, boot into Windows, no problem. Oh, I haven’t booted into Windows in
a few weeks, so there is an OS update that needs to be installed. Oh, I can’t
skip this and have to wait for it to complete before I can log in. Oh, it took
30 minutes to install the OS update. Well, I guess I’ll switch back over to
Linux and try playing a game again in a few weeks or so. At which point there
will be another OS update and the cycle repeats.


THE HORRORS OF DISTRIBUTING LINUX SOFTWARE

But back to the topic at hand. The reason I’m linking this video is that Mr.
Torvalds talks about the difficulty of distributing desktop software for Linux,
and on that I agree with him 100%. I have created and am maintaining several
Linux-exclusive software packages that are used by a significant number of
non-technical people, the Augmented Reality Sandbox being the main one. It’s a
pretty big piece of software comprising three components: the Vrui VR toolkit,
the Kinect 3D video capture package, and the AR Sandbox application itself. Vrui
is a general-purpose VR toolkit that covers the gamut from tracking 6-DOF input
devices and reprojecting and distortion-correcting rendered images onto an HMD’s
display to high-level user interaction and UI widgets. It does a lot more than
that, too. The main point is that Vrui is system software, and is therefore
deeply tied into the operating system, and relies on a large number of system
libraries and interfaces. Which means that packaging and releasing it in binary
form is an absolute nightmare. This video spoke to me.

Continue reading →


Posted in Programming | Tagged Distribution, Package management, Release | 1 Reply


NOW THIS IS SOME EXCEPTIONAL CODE

Posted on September 14, 2023 by okreylos

I have been re-writing large chunks of Vrui recently, primarily to support a new
Vulkan-based HMD graphics driver that will warp and distortion-correct rendered
application image frames to an HMD operating in “direct mode,” i.e., without
being managed by the window manager. Yes, I know I’m several years late to that
particular party. 🙂

While I was doing that, which involved staring at a lot of old code for extended
periods, I also cleaned up some things that had been bugging me for a long time.
Specifically, error handling. I like descriptive error messages, because I find
they make it easier to pinpoint problems encountered by users of my software
who are not themselves programmers, like, say, people who install an AR Sandbox
at their location. I like it when an error message tells me what went wrong, and
where it went wrong. Something like “I’m currently in method Z of class Y in
namespace X, and I can’t open requested file A because of operating system error
B.” In other words, I want error messages tagged with a location like “X::Y::Z,”
and with parameters like a file name or OS error code. I also want to use
exceptions, obviously. Unfortunately, C++’s standard exception classes don’t
have methods to create exception objects with parameters, so, a very long time
ago, I decided to roll my own.
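
To illustrate the general idea, here is a minimal sketch of such an exception class (my own reconstruction for illustration, not Vrui's actual classes or API): it derives from std::runtime_error and bakes a location tag plus printf-style parameters into the message:

  #include <cstdarg>
  #include <cstdio>
  #include <stdexcept>
  #include <string>

  // Minimal location-tagged exception; illustration only, not Vrui's API:
  class LocatedError : public std::runtime_error
  {
  public:
    explicit LocatedError(const std::string& whatArg) : std::runtime_error(whatArg) {}

    // Build a "Namespace::Class::method: message with parameters" string:
    static std::string format(const char* location, const char* fmt, ...)
    {
      char msg[1024];
      va_list args;
      va_start(args, fmt);
      std::vsnprintf(msg, sizeof(msg), fmt, args);
      va_end(args);
      return std::string(location) + ": " + msg;
    }
  };

  // Hypothetical use inside method Z of class Y in namespace X:
  //   throw LocatedError(LocatedError::format("X::Y::Z",
  //     "Cannot open requested file %s due to OS error %d", fileName, errorCode));

The resulting what() string then reads like "X::Y::Z: Cannot open requested file A due to OS error B," which is exactly the kind of message that lets a non-programmer report a useful bug.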

Continue reading →


Posted in Programming | Tagged C++, Exception handling, g++ | 1 Reply


A NEW AR SANDBOX SUPPORT FORUM

Posted on March 31, 2022 by okreylos

Apparently, the AR Sandbox is still a thing and going strong after ten years,
with over 850 registered installations world-wide according to the AR Sandbox
World Map. There was a lull in new installations and community activity during
the initial COVID-19 lockdowns, but things are picking up again, and with that I
am seeing an increasing amount of requests for help arriving in my personal
email.

An AR Sandbox.

The old AR Sandbox support forum was quite active and significantly reduced my
support load, not only by allowing me to answer common questions once instead
of dozens of times, but also by letting community members help each other
directly. Unfortunately, it went down due to hardware problems a good while
ago, and there is currently no avenue for getting it back up.

So I decided to create a new AR Sandbox support forum on this here web site, as
a hopefully temporary replacement. I was not able to move over any of the old
forum content due to not having access to the original database files, which is
a major pity because there was a ton of helpful stuff on there. I am hoping that
the new forum will accumulate its own set of helpful stuff quickly, and if/when
I migrate the forum to a permanent location, I will be able to move all content
because I have full access to this web site’s code and database. So here’s
hoping.

This is the first forum on this web site, so I hope that things will work right
from the start; if not, we’ll figure out how to fix it. Please be patient.

And as a quick reminder: These are the only official AR Sandbox installation
instructions. Accept no substitutes.


Posted in How-To, VR Applications, VR Hardware, VR Methods, VR Software | Tagged AR sandbox | 5 Replies


WE’RE BACK!

Posted on February 4, 2022 by okreylos

Wow, it’s been a while.

The server (by which I mean the physical mid-size tower PC, see Figure 1) that
used to run this blog, and was stashed in a server room in my old building on UC
Davis campus, went down in June 2021 due to a brief power outage, and I never
got around to turning it back on due to the COVID-related campus lock-down.

Figure 1: The old server, which had been running doc-ok.org for about ten years.
Your eyes do not deceive you: that is a GeForce GTX 280 in there. Truly cutting
edge!

I finally remembered to ask the CS department’s IT support staff to pull it out
of that server room a few days ago, and have been migrating this site to a new,
actually virtual, server since then. And here we are! There’s still a lot of
maintenance to do, such as upgrading all the hideously outdated platform
packages, but at least the old content is back for the time being.

In other news, I myself moved from the Department of Earth & Planetary Sciences
to the UC Davis DataLab around the same time this site went down, and recently
finished setting up VRoom!, DataLab’s new multi-user VR space. There will be a
detailed post about that soon. There has been a lot of movement on Vrui’s
collaboration infrastructure as well, and there were some exciting adventures in
Lighthouse tracking.


Posted in Uncategorized | Leave a reply


ARE MATH TEXTBOOKS WRITTEN BY PEOPLE WHO HATE MATH?

Posted on September 17, 2020 by okreylos

Now that I’m basically home-schooling my daughter due to The Lockdown, I’m
realizing how ridiculous math textbooks and workbooks are. Who writes these
things / creates these problem sets? Today’s homework assignment had these
nuggets in it:

“Kelly subtracted 2.3 from 20 and got 17.7. Explain why this answer is
reasonable.”

The obvious answer is “because it is correct.” But that would get the student
zero points. The expected (I assume) answer is about number sense / estimation,
e.g., “If I subtract 2 from 20 I get 18, but I have to subtract a little bit
more, and 17.7 is a little bit less than 18, so 17.7 is a reasonable answer.”
Now my issue with this problem is that the actual arithmetic is so simple that
it is arguably easier to just do it than to go the estimation route.
The problem sets the students up for failure, and undercuts the point of the
unit: that estimation is a valuable tool. A better problem would have used
numbers with more digits to hint that the students were supposed to estimate the
result instead of calculating it, and to show that estimation saves time and
effort.

“At a local swim meet, the second-place swimmer of the 100-m freestyle had a
time of 9.33 sec. …”

This one made me laugh out loud, and I’m not even a sports fan who follows
swimming. But even I know that swimming is a lot slower than running, and upon
checking, I found that the world record for the 100m freestyle is 46.91 seconds.
Who was competing in this “local swim meet”? Aquaman? My issue here is that the
problem creator failed to understand the reason for using this type of word
problem: reinforcing the important notion that math is important in the real
world. But by choosing these laughable numbers, the creator not only undercut
that notion, but created exactly the opposite impression in the students: that
math has no relationship to the real world.



And from today’s section of the textbook, this table:

Location          Rainfall amount in a typical year (inches)
Macon, GA         45
Boise, ID         12.19
Caribou, ME       37.44
Springfield, MO   44.97

Followed by this question: “What is the typical yearly rainfall for all four
cities?” The book expects 139.6 inches as the answer, but that answer makes no
sense. Rainfall amounts measured in inches cannot be added up across multiple
locations, because they are ratios, specifically volume of rain per area. How is
that supposed to work? Stacking the four cities on top of each other? As in the
previous example, this problem undercuts the goal of showing that math has a
relationship to the real world. These students, being in fifth grade, wouldn’t
necessarily realize the issue with this problem, but it really makes me wonder
whether the person creating this example has advanced beyond fifth grade. Or,
even worse, whether that person is actively trying to create the impression that
math is just some numbers game that happens in a vacuum. If so, good job.

My daughter was actually stumped by this last one, having no idea what the book
meant by “typical yearly rainfall for all four cities,” and I had to explain to
her that the question makes no sense, and reassure her that math is important,
even if the math textbook goes out of its way to teach the students that math is
frustrating, incomprehensible, and has no point. Again, good job, textbook
writers.

In violation of Betteridge’s Law, I will answer the question posed in this
post’s headline with a resounding “YES!”


Posted in Bellyaching | Tagged "Thanks for nothing!", Math, Textbooks | 4 Replies


IDLE HANDS ETC. ETC.

Posted on May 22, 2020 by okreylos

A friendly redditor sent me this link to a popular post on /r/funny yesterday
(see Figure 1 for the picture). I might have mentioned before how it was that
exact scene in the original Star Wars movie that got me into 3D computer
graphics and later VR, so it got me thinking how that particular shot would have
looked if the miniature ILM used to film the trench run scene had not been
flat, but had exhibited the proper scaled curvature of the Death Star.

Figure 1: Death Star and trench from attack scene in A New Hope, showing the
flat miniature that was used to shoot the scene. Source.

Two hours and 153 lines of code later, here are a couple of images which are
hopefully true to scale. I used 160km as the Death Star’s diameter, based on its
Wookieepedia entry (Wikipedia states 120km, but I’m siding with the bigger nerds
here), and I assumed the meridian trench’s width and depth to be 50m, based on
the size of an X-Wing fighter and shot compositions from the movie.

Side note: I don’t know how common this misconception is, but the trench
featured in the trench run scenes is not the equatorial trench prominently
visible in Figure 1. That one holds massive hangars (as seen in the scene where
the Millennium Falcon is tractor-beamed into the Death Star) and is vastly
larger than the actual trench, which is a meridian (north-south running) trench
on the Death Star’s northern hemisphere, as clearly visible on-screen during the
pre-attack briefing (but then, who ever pays attention in briefings).

The images in Figures 2-6 are 3840×2160 pixels. Right-click and select “View
Image” to see them at full size.

Figure 2: Meridian trench on spherical Death Star, approx. 12.5m above trench floor. Horizon distance: 1.4km.
Figure 3: Meridian trench on spherical Death Star, approx. 25m above trench floor. Horizon distance: 2km.
Figure 4: Meridian trench on spherical Death Star, approx. 37.5m above trench floor. Horizon distance: 2.5km.
Figure 5: Meridian trench on spherical Death Star, precisely at Death Star’s surface. Horizon distance: 2.8km.
Figure 6: Meridian trench on spherical Death Star, approx. 100m above Death Star’s surface. Horizon distance: 4.9km.
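
For the curious, the horizon distances in those captions follow from the textbook horizon formula d = √(2rh + h²) for an eye at height h above a sphere of radius r. Here is a quick back-of-the-envelope check (throwaway code for illustration, not the program that rendered the images), assuming the 80km Death Star radius and 50m trench depth from above, and reading the captions' heights as measured above the trench floor:

  #include <cmath>
  #include <cstdio>

  int main()
  {
    // Trench floor sits 50m below the 80km-radius surface:
    const double floorRadius = 80000.0 - 50.0;
    // Eye heights above the trench floor, in meters; 50m is surface level,
    // 150m is 100m above the surface:
    const double heights[] = { 12.5, 25.0, 37.5, 50.0, 150.0 };
    for(double h : heights)
    {
      // Distance to the horizon of a sphere of radius r, seen from height h:
      double d = std::sqrt(2.0 * floorRadius * h + h * h);
      std::printf("h = %5.1fm -> horizon at %.1fkm\n", h, d / 1000.0);
    }
    return 0;
  }

This prints 1.4km, 2.0km, 2.4km, 2.8km, and 4.9km, matching the captions above up to rounding.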

As can be seen from Figures 2-6, the difference between the flat miniature used
in the movie and the spherical model I used is relatively minor, but
noticeable — ignoring the glaring lack of greebles in my model, obviously. I
noticed the lack of curvature for the first time while re-watching A New Hope
when the prequels came out, but can’t say I ever cared. Still, this was a good
opportunity for some recreational coding.


Posted in Programming, Scientific Visualization | Tagged Death Star, Horizon, Star Wars | Leave a reply


IS TCP REALLY THAT SLOW?

Posted on April 21, 2020 by okreylos

I’m still working on Vrui’s second-generation collaboration / tele-presence
infrastructure (which is coming along nicely, thankyouverymuch), and I also
recently started working with another group of researchers who are trying to
achieve similar goals, but have some issues with their own home-grown network
system, which is based on Open Sound Control (OSC). I did some background
research on OSC this morning, and ran into several instances of an old pet peeve
of mine: the relative performance of UDP vs TCP. Actually, I was trying to find
out whether OSC communicates over UDP or TCP, and whether there is a way to
choose between those at run-time, but most sources that turned up were about
performance (it turns out OSC simply doesn’t do TCP).

Here are some quotes from one article I found: “I was initially hoping to use
UDP because latency is important…” “I haven’t been able to fully test using TCP
yet, but I’m hopeful that the trade-off in latency won’t be too bad.”

Here are quotes from another article: “UDP has it’s [sic] uses. It’s relatively
fast (compared with TCP/IP).” “TCP/IP would be a poor substitute [for UDP], with
it’s [sic] latency and error-checking and resend-on-fail…” “[UDP] can be
broadcast across an entire network easily.” “Repeat that for multiple players
sharing a game, and you’ve got a pretty slow, unresponsive game. Compared to
TCP/IP then UDP is fast.” “For UDP’s strengths as a high-volume, high-speed
transport layer…” “Sending data via TCP/IP has an ‘overhead’ but at least you
know your data has reached its destination.” “… if the response time [over TCP]
was as much as a few hundred milliseconds, the end result would be no
different!”

Continue reading →


Posted in Uncategorized, VR Methods, VR Software | Tagged Bandwidth, ICMP, Latency, Networking, TCP, UDP | 7 Replies


A QUESTION ABOUT VR HEADSET RESOLUTION

Posted on April 16, 2020 by okreylos

I received a question via reddit a few moments ago, and I think the answer might
be of general interest, so I decided to answer it here:

“Would you happen to know the effective or perceived resolution of the [Valve
Index headset] when viewing a 50″ virtual screen from say.. 5 feet away? Do you
think its equivalent to a 50″ 1080p tv from 5 ft away yet? I was also wondering
why when I look at close up objects on the index that I can see basically no
screen door effect, but when looking into the distance at the sky then suddenly
the sde becomes very noticeable.”

Okay, so that’s actually two questions. Let’s start with the first one, and do
the math.

The first thing we have to figure out is the resolution of a 50″ 1080p TV from 5
feet away. That’s pretty straightforward: a 1080p TV has 1920 pixels
horizontally and 1080 pixels vertically. Meaning, it has √(1920² + 1080²) =
2202.9 pixels along the diagonal, and – assuming the pixels are square – a pixel
size of 50″/2202.9 = 0.0227″. Next we have to figure out the angle subtended by
one of those pixels, when seen from 5 feet away. That’s α =
tan⁻¹(0.0227″/(5⋅12″)) = 0.0217°. Inverting that number yields the TV’s
resolution as 46.14 pixels/°.
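
For those who want to plug in their own screen, here is the same calculation in code form (a quick sketch; the function name and parameters are made up for illustration):

  #include <cmath>
  #include <cstdio>

  // Angular resolution in pixels/degree of a flat screen viewed head-on,
  // with diagonal size and viewing distance given in inches:
  double pixelsPerDegree(double diagonal, int hPixels, int vPixels, double distance)
  {
    const double pi = 3.14159265358979;
    double diagPixels = std::sqrt(double(hPixels) * hPixels + double(vPixels) * vPixels);
    double pixelSize = diagonal / diagPixels; // assumes square pixels
    double pixelAngle = std::atan(pixelSize / distance) * 180.0 / pi; // in degrees
    return 1.0 / pixelAngle;
  }

  int main()
  {
    // 50" 1080p TV seen from 5 feet (60"): prints about 46.14 pixels/degree
    std::printf("%.2f pixels/degree\n", pixelsPerDegree(50.0, 1920, 1080, 60.0));
    return 0;
  }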

Figuring out a VR headset’s resolution is more complex, and I still haven’t
measured a Valve Index, but I estimate its resolution in the forward direction
somewhere around 15 pixels/°. That means the resolution of the hypothetical 50″
TV, viewed from 5 feet away, is approximately three times as high as the
resolution of a Valve Index. The interested reader can simulate the perceived
resolution of a VR headset of known resolution by following the steps in this
article.

The second question is about screen-door effect (SDE). As shown in Figure 1, SDE
is a high-frequency grid superimposed over a low-frequency (low-resolution)
pixel grid, which makes it so noticeable and annoying. But why does it become
less noticeable or even disappear when viewing virtual objects that are close to
the viewer? That’s vergence-accommodation conflict rearing its typically ugly,
but in this case beneficial, head. When viewing a close-by virtual object, the
viewer’s eyes accommodate to focus on a close distance, but the virtual image
shown by the VR headset is still at its fixed distance, somewhere around 1.5‒2m
away depending on headset model. Meaning, the image will be somewhat blurred,
and SDE, being a high-frequency signal, will be affected much more than the
lower-frequency actual image signal.

Figure 1: Low resolution vs. screen-door effect (SDE). (Right-click and “view
image” to see in full resolution.)




Posted in VR Hardware | Tagged Resolution, Screen-door effect | 3 Replies


A CLARIFICATION ABOUT “BLACK SMEAR”

Posted on December 29, 2019 by okreylos

Here’s another frequently-asked question about VR headsets, or at least those
that use LED-based displays:

> Why does my headset show dark grey when it’s supposed to show black? Shouldn’t
> LED displays be able to show perfect blacks?

I addressed this in detail a long time ago, but the question keeps popping up,
and it is often answered like the following: “LED display pixels have a memory
effect when they are turned off completely, which causes ‘black smear.’ This can
be avoided by never turning them off completely.”

Unfortunately, that answer is mostly wrong. LED display pixels do have a memory
effect (for reasons too deep to get into right now), but it is not due to being
turned off completely. The obvious counterargument is that, in the
low-persistence displays used in all LED-based headsets, all display pixels are
completely turned off for around 90% of the time anyway, no matter how brightly
they are turned on during their short duty cycle. That’s what “low persistence”
means. So having them completely turned off during their 1ms or so duty cycles
as well won’t suddenly cause a memory effect.

The real answer is mathematics. In a slightly simplified model, the memory
effect of LED displays has the following structure: if some pixel is set to
brightness b1 in one frame, and set to brightness b2 in the next frame, it will
only “move” by a certain fraction of the difference, i.e., its resulting
effective brightness in the next frame will not be b2 = b1 + (b2 − b1), but b2′
= b1 + (b2 − b1)⋅s, where s, the “smear factor,” is a number between zero and
one (it’s usually around 0.9 or so).

For example, if b1 was 0.1 (let’s measure brightness from 0 = completely off to
1 = fully lit), b2 is 0.7, and s = 0.8, then the pixel’s effective brightness in
frame 2 is b2′ = 0.1 + (0.7 − 0.1)⋅0.8 = 0.58, so too dark by 17%. This
manifests as a darkening of bright objects that move into previously dark areas
(“black smear”). The opposite holds, too: if the pixel’s original brightness was
b1 = 0.7, and its new intended brightness is b2 = 0.1, its effective new
brightness is b2′ = 0.7 + (0.1 − 0.7)⋅0.8 = 0.22, so too bright by 120%(!). This
manifests as bright trails following bright objects moving over dark backgrounds
(“white smear”).

The solution to black and white smear is to “overdrive” pixels from one frame to
the next. If a pixel’s old brightness is b1, and its intended new brightness is
b2, instead of setting the pixel to b2, it is set to an “overdrive brightness”
bo calculated by solving the smear formula for b2, where b2′ is now the
intended brightness: bo = (b2 − b1)/s + b1.

Let’s work through the two examples I used above. First, from dark to bright: b1
= 0.1, b2 = 0.7, and s = 0.8. That yields bo = (0.7 − 0.1)/0.8 + 0.1 = 0.85.
Plugging bo = 0.85 into the smear formula as b2 yields b2′ = 0.1 + (0.85 −
0.1)⋅0.8 = 0.7, as intended. Second, going from bright to dark: b1 = 0.7, b2 =
0.1, and s = 0.8 yields bo = (0.1 − 0.7)/0.8 + 0.7 = −0.05. Oops. In order to
force a pixel that had brightness 0.7 on one frame to brightness 0.1 on the next
frame, we would need to set the pixel’s brightness to a negative value. But that
can’t be done, because pixel brightness values are limited to the interval [0,
1]. Ay, there’s the rub.

This is a fundamental issue, but there’s a workaround. If the range of intended
pixel brightness values is limited from the full range of [0, 1] to the range
[bmin, bmax], such that going from bmin to bmax will yield an overdrive
brightness bo = 1, and going from bmax to bmin will yield an overdrive
brightness bo = 0, then black and white smear can be fully corrected. The price
for this workaround is paid on both ends of the range: the high brightness
values (bmax, 1] can’t be used, meaning the display is a tad darker than
physically possible (a negligible issue with bright LEDs), and the low
brightness values [0, bmin) can’t be used, which is a bigger problem because it
significantly reduces contrast ratio, which is a big selling point of LED
displays in the first place, and means that surfaces intended to be completely
black, such as night skies, will show up as dark grey.

Let’s close by working out bmin and bmax, which only depend on the smear factor
s and can be derived from the two directions of the overdrive formula: 1 = (bmax
– bmin)/s + bmin and 0 = (bmin – bmax)/s + bmax. Solving yields bmin = (1 –
s)/(2 – s) and bmax = 1/(2 – s). Checking these results by calculating the
overdrive values to go from bmin to bmax, which should be 1, and from bmax to
bmin, which should be 0, is left as an exercise to the reader.
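
For concreteness, here is the whole scheme in a few lines of code (an illustrative sketch, not any actual display driver), using the smear factor of 0.9 from the realistic example below:

  #include <cstdio>

  int main()
  {
    const double s = 0.9; // assumed smear factor
    const double bmin = (1.0 - s) / (2.0 - s); // darkest usable brightness
    const double bmax = 1.0 / (2.0 - s); // brightest usable brightness
    std::printf("usable brightness range: [%.2f, %.2f]\n", bmin, bmax);

    // Overdrive value to move a pixel from old brightness b1 to intended b2:
    auto overdrive = [s](double b1, double b2) { return (b2 - b1) / s + b1; };

    // The two extreme transitions land exactly on the ends of [0, 1]:
    std::printf("bmin -> bmax: drive to %.2f\n", overdrive(bmin, bmax)); // 1.00
    std::printf("bmax -> bmin: drive to %.2f\n", overdrive(bmax, bmin)); // 0.00
    return 0;
  }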

In a realistic example, using a smear factor of 0.9, the usable brightness range
works out to [0.09, 0.91], meaning the darkest the display can be is 9% grey.


Posted in VR Hardware | Tagged Black smear, OLED | 18 Replies


QUANTITATIVE COMPARISON OF VR HEADSET FIELDS OF VIEW

Posted on December 29, 2019 by okreylos

Although I’ve taken many through-the-lens pictures of several common VR headsets
with a calibrated wide-angle camera, until recently I was still struggling with
how to compare the resulting fields of view (FoV) quantitatively, how to put
them in context, and how to visualize them appropriately. When trying to answer
questions like “which VR headset has a bigger FoV?” or “by how much is headset
A’s FoV bigger than headset B’s?” or “how does headset C’s FoV compare to the
field of vision of the naked human eye?”, the basic question is: how does one
even measure field of view, in a way that is fair and allows comparison across a
wide range of possible sizes? Does one report a single angle, and how does one
measure it? Across the “diagonal” of the field of view? What if the field of
view is not a rectangle? Does one report a pair of angles, say
horizontal⨉vertical? Again, what if the field of view is not a rectangle?

Then, if FoV is measured either as a single angle or a pair of angles, how does
one compare different FoVs fairly? If one headset has 100° FoV, and another has
110°, does the latter show 10% more of a virtual 3D environment? What if one has
100°⨉100° and another has 110°⨉110°, does the latter show 21% more?

To find a reasonable answer, let’s go back to the basics: what does FoV actually
measure? The general idea is that FoV measures how much of a virtual 3D
environment a user can see at any given instant, meaning, without moving their
head. A larger FoV value should mean that a user can see more, and, ideally, an
FoV value that is twice as large should mean that a user can see twice as much.

Now, what does it mean that “something can be seen?” We can see something if
light from that something reaches our eye, enters the eye through the cornea,
pupil, and lens, and finally hits the retina. In principle, light travels
towards our eyes from all possible directions, but only some of those directions
end up on the retina due to various obstructions (we can’t see behind our heads,
for example). So a reasonable measure of field of view (for one eye) would be
the total number of different 3D directions from which light reaches that eye’s
retina. The problem is that there is an infinite number of different directions
from which light can arrive, so simple counting does not work.

Continue reading →


Posted in Scientific Visualization, VR Hardware, VR Methods | Tagged Field-of-view, HTC Vive Pro, Oculus Rift, PlayStation VR | 2 Replies

