1 - 17 of 17
  • 1.
    Axholt, Magnus
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
Pinhole Camera Calibration in the Presence of Human Noise, 2011. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

The research work presented in this thesis is concerned with the analysis of the human body as a calibration platform for estimation of a pinhole camera model used in Augmented Reality environments mediated through an Optical See-Through Head-Mounted Display. Since the quality of the calibration ultimately depends on a subject’s ability to construct visual alignments, the research effort is initially centered around user studies investigating human-induced noise, such as postural sway and head aiming precision. Knowledge about subject behavior is then applied to a sensitivity analysis in which simulations are used to determine the impact of user noise on camera parameter estimation.

    Quantitative evaluation of the calibration procedure is challenging since the current state of the technology does not permit access to the user’s view and measurements in the image plane as seen by the user. In an attempt to circumvent this problem, researchers have previously placed a camera in the eye socket of a mannequin, and performed both calibration and evaluation using the auxiliary signal from the camera. However, such a method does not reflect the impact of human noise during the calibration stage, and the calibration is not transferable to a human as the eyepoint of the mannequin and the intended user may not coincide. The experiments performed in this thesis use human subjects for all stages of calibration and evaluation. Moreover, some of the measurable camera parameters are verified with an external reference, addressing not only calibration precision, but also accuracy.
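
    As background for the abstract above, here is a minimal sketch of the pinhole camera model it refers to: a 3x3 intrinsic matrix K and a rigid pose [R | t] map a 3D point to image coordinates. The numeric values below are illustrative placeholders, not parameters from the thesis.

```python
import numpy as np

# Pinhole model: x ~ K [R | t] X, followed by perspective division.
K = np.array([[800.0,   0.0, 320.0],    # fx, skew, cx (placeholder values)
              [  0.0, 800.0, 240.0],    # fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                           # world-to-camera rotation
t = np.zeros((3, 1))                    # translation

def project(X_world):
    """Project a 3D world point to 2D image coordinates."""
    X = np.asarray(X_world, dtype=float).reshape(3, 1)
    x = K @ (R @ X + t)                 # homogeneous image point
    return (x[:2] / x[2]).ravel()       # perspective division

print(project([0.1, -0.05, 2.0]))       # a point 2 m in front of the camera
```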

  • 2.
    Axholt, Magnus
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Peterson, Stephen D.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ellis, Stephen
    Human Systems Integration Division NASA Ames Research Center.
User Boresight Calibration Precision for Large-Format Head-Up Displays, 2008. In: Proceedings of the 2008 ACM symposium on Virtual reality software and technology, New York, NY, USA: ACM, 2008, p. 141-148. Conference paper (Refereed)
    Abstract [en]

The postural sway in 24 subjects performing a boresight calibration task on a large format head-up display is studied to estimate the impact of human limits on boresight calibration precision and ultimately on static registration errors. The dependent variables, accumulated sway path and omni-directional standard deviation, are analyzed for the calibration exercise and compared against control cases where subjects are quietly standing with eyes open and eyes closed. Findings show that postural stability significantly deteriorates during boresight calibration compared to when the subject is not occupied with a visual task. Analysis over time shows that the calibration error can be reduced by 39% if calibration measurements are recorded in a three-second interval at approximately 15 seconds into the calibration session, as opposed to an initial reading. Furthermore, parameter optimization on the experiment data suggests a Weibull distribution as a possible error model for omni-directional calibration precision. This paper extends previously published preliminary analyses, and the conclusions are verified with experiment data corrected for the subjects' inverted-pendulum compensatory head rotation, which provides a better estimate of the eye position. With this correction the statistical findings are reinforced.
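
    The Weibull fit mentioned above can be reproduced in outline with scipy. This is a hedged sketch on synthetic data, since the paper's sway recordings are not available here; weibull_min and its fit routine are standard scipy.stats calls, and the sample values are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
errors_mm = rng.weibull(1.8, size=500) * 4.0   # synthetic sway-error magnitudes

# Fit a two-parameter Weibull (location pinned at 0: errors are non-negative).
shape, loc, scale = stats.weibull_min.fit(errors_mm, floc=0)
print(f"shape k = {shape:.2f}, scale = {scale:.2f} mm")

# Crude goodness-of-fit check against the fitted distribution.
ks = stats.kstest(errors_mm, stats.weibull_min(shape, loc, scale).cdf)
print(f"KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```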

  • 3.
    Axholt, Magnus
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Peterson, Stephen D.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ellis, Stephen
    Human Systems Integration Division NASA Ames Research Center.
User Boresight for AR Calibration: A Preliminary Analysis, 2008. In: IEEE Virtual Reality Conference, 2008. VR '08 / [ed] Ming Lin, Anthony Steed, Carolina Cruz-Neira, Piscataway, NJ, USA: IEEE, 2008, p. 43-46. Conference paper (Refereed)
    Abstract [en]

The precision with which users can maintain boresight alignment between visual targets at different depths is recorded for 24 subjects using two different boresight targets. Subjects' normal head stability is established using their Romberg coefficients. Weibull distributions are used to describe the probabilities of the magnitude of head positional errors, and the three-dimensional cloud of errors is displayed by orthogonal two-dimensional density plots. These data will lead to an understanding of the limits of user-introduced calibration error in augmented reality systems.
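
    The Romberg coefficient used above is the ratio of a sway measure recorded with eyes closed to the same measure recorded with eyes open; values near 1 suggest little visual contribution to postural stability. A minimal sketch on synthetic head-position traces (the traces and sampling are assumptions, not experiment data):

```python
import numpy as np

def sway_path_length(positions):
    """Accumulated path length of an N x 2 sequence of 2D positions."""
    return np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))

rng = np.random.default_rng(1)
eyes_open   = np.cumsum(rng.normal(0, 0.3, size=(1000, 2)), axis=0)  # mm
eyes_closed = np.cumsum(rng.normal(0, 0.5, size=(1000, 2)), axis=0)  # mm

romberg = sway_path_length(eyes_closed) / sway_path_length(eyes_open)
print(f"Romberg coefficient = {romberg:.2f}")  # > 1: vision stabilizes posture
```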

  • 4.
    Axholt, Magnus
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Peterson, Stephen D.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ellis, Stephen R.
    NASA Ames Research Center.
Visual Alignment Accuracy in Head Mounted Optical See-Through AR Displays: Distribution of Head Orientation Noise, 2009. In: Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting 2009, San Antonio (TX), USA: Human Factors and Ergonomics Society, 2009, p. 2024-2028. Conference paper (Refereed)
    Abstract [en]

The mitigation of registration errors is a central challenge for improving the usability of Augmented Reality systems. While the technical achievements within tracking and display technology continue to improve the conditions for good registration, little research is directed towards understanding the user’s visual alignment performance during the calibration process. This paper reports 12 standing subjects’ visual alignment performance using an optical see-through head mounted display for viewing directions varied in azimuth (0°, ±30°, ±60°) and elevation (0°, ±10°). Although viewing direction has a statistically significant effect on the shape of the distribution, the effect is small and negligible for practical purposes and can be approximated to a circular distribution with a standard deviation of 0.2° for all viewing directions studied in this paper. In addition to quantifying head aiming accuracy with a head fixed cursor and illustrating the deteriorating accuracy of boresight calibration with increasing viewing direction extremity, the results are applicable for filter design determining the onset and end of head rotation.

  • 5.
    Axholt, Magnus
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Peterson, Stephen D.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ellis, Stephen R.
    NASA Ames Research Center.
Visual Alignment Precision in Optical See-Through AR Displays: Implications for Potential Accuracy, 2009. In: Proceedings of the ACM/IEEE Virtual Reality International Conference, Association for Computing Machinery (ACM), 2009. Conference paper (Other academic)
    Abstract [en]

The quality of visual registration achievable with an optical see-through head mounted display (HMD) ultimately depends on the user’s targeting precision. This paper presents design guidelines for calibration procedures based on measurements of users’ head stability during visual alignment with reference targets. Targeting data was collected from 12 standing subjects who aligned a head fixed cursor presented in a see-through HMD with background targets that varied in azimuth (0°, ±30°, ±60°) and elevation (0°, ±10°). Their data showed that: 1) both position and orientation data will need to be used to establish calibrations based on nearby reference targets, since eliminating body sway effects can improve calibration precision by a factor of 16 and eliminate apparent angular anisotropies; 2) compensation for body sway can speed the calibration by removing the need to wait for the body sway to abate; and 3) calibration precision can be less than 2 arcmin even for head directions rotated up to 60° with respect to the user’s torso, provided body sway is corrected. Users of Augmented Reality (AR) applications overlooking large distances may avoid the need to correct for body sway by boresighting on markers at relatively long distances, >> 10 m. These recommendations contrast with those for head-up displays using real images as discussed in previous papers.

  • 6.
    Axholt, Magnus
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Peterson, Stephen
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ellis, Stephen R.
    Human Systems Integration Division, NASA Ames Research Center.
User Boresighting for AR Calibration: A Preliminary Analysis, 2008. In: Proceedings of the IEEE Virtual Reality Conference 2008, IEEE, 2008, p. 43-46. Conference paper (Refereed)
    Abstract [en]

The precision with which users can maintain boresight alignment between visual targets at different depths is recorded for 24 subjects using two different boresight targets. Subjects' normal head stability is established using their Romberg coefficients. Weibull distributions are used to describe the probabilities of the magnitude of head positional errors, and the three-dimensional cloud of errors is displayed by orthogonal two-dimensional density plots. These data will lead to an understanding of the limits of user-introduced calibration error in augmented reality systems.

  • 7.
    Axholt, Magnus
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Skoglund, Martin A.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    O’Connell, Stephen D.
    Swedish Air Force Combat Simulation Center at the Swedish Defence Research Agency.
    Cooper, Matthew D.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ellis, Stephen R.
    Human Systems Integration Division at NASA Ames Research Center.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
Accuracy of Eyepoint Estimation in Optical See-Through Head-Mounted Displays Using the Single Point Active Alignment Method, 2011. Conference paper (Other academic)
    Abstract [en]

This paper studies the accuracy of the estimated eyepoint of an Optical See-Through Head-Mounted Display (OST HMD) calibrated using the Single Point Active Alignment Method (SPAAM). Quantitative evaluation of calibration procedures for OST HMDs is complicated as it is currently not possible to share the subject’s view. Temporarily replacing the subject’s eye with a camera during the calibration or evaluation stage has been proposed, but the uncertainty of a correct eyepoint estimation remains. In the experiment reported in this paper, subjects were used for all stages of calibration and the results were verified with a 3D measurement device. The nine participants constructed 25 visual alignments per calibration after which the estimated pinhole camera model was decomposed into its intrinsic and extrinsic parameters using two common methods. Unique to this experiment, compared to previous evaluations, is the measurement device used to cup the subject’s eyeball. It measures the eyepoint location relative to the head tracker, thereby establishing the calibration accuracy of the estimated eyepoint location. As the results on accuracy are expressed as individual pinhole camera parameters, rather than a compounded registration error, this paper complements previously published work on parameter variance as the former denotes bias and the latter represents noise. Results indicate that the calibrated eyepoint is on average 5 cm away from its measured location and exhibits a vertical bias which potentially causes dipvergence for stereoscopic vision for objects located further away than 5.6 m. Lastly, this paper closes with a discussion on the suitability of the traditional pinhole camera model for OST HMD calibration.
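
    For orientation, a compact sketch of the SPAAM-style pipeline the abstract describes: solve a 3x4 projection matrix from 2D-3D alignments by Direct Linear Transformation, then decompose it into intrinsics, rotation, and the eyepoint (camera center). Synthetic correspondences stand in for the 25 user alignments; this is the textbook method, not the authors' exact code.

```python
import numpy as np
from scipy.linalg import rq

def dlt(X3, x2):
    """Estimate P (3x4) from n >= 6 point correspondences via SVD."""
    A = []
    for (X, Y, Z), (u, v) in zip(X3, x2):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    return np.linalg.svd(np.asarray(A))[2][-1].reshape(3, 4)

def decompose(P):
    """Split P = K [R | -R C] into intrinsics K, rotation R, eyepoint C."""
    K, R = rq(P[:, :3])
    S = np.diag(np.sign(np.diag(K)))        # force a positive diagonal in K
    K, R = K @ S, S @ R
    C = -np.linalg.inv(P[:, :3]) @ P[:, 3]  # camera center = eyepoint
    return K / K[2, 2], R, C

# Synthetic ground truth to exercise the pipeline (placeholder values).
K_true = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
C_true = np.array([0.05, 0.10, 0.0])        # "true" eyepoint in metres
P_true = K_true @ np.hstack([np.eye(3), -C_true.reshape(3, 1)])

rng = np.random.default_rng(2)
X3 = rng.uniform([-1, -1, 2], [1, 1, 6], size=(25, 3))   # 25 alignments
xh = (P_true @ np.hstack([X3, np.ones((25, 1))]).T).T
x2 = xh[:, :2] / xh[:, 2:]

K, R, C = decompose(dlt(X3, x2))
print("recovered eyepoint:", np.round(C, 3))   # should match C_true
```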

  • 8.
    Axholt, Magnus
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Skoglund, Martin
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    O'Connell, Stephen
    Swedish Defence Research Agency.
    Cooper, Matthew
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ellis, Stephen
    NASA Ames Research Center.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
Parameter Estimation Variance of the Single Point Active Alignment Method in Optical See-Through Head Mounted Display Calibration, 2011. In: Proceedings of the IEEE Virtual Reality Conference / [ed] Michitaka Hirose, Benjamin Lok, Aditi Majumder and Dieter Schmalstieg, Piscataway, NJ, USA: IEEE, 2011, p. 27-34. Conference paper (Refereed)
    Abstract [en]

The parameter estimation variance of the Single Point Active Alignment Method (SPAAM) is studied through an experiment where 11 subjects are instructed to create alignments using an Optical See-Through Head Mounted Display (OSTHMD) such that three separate correspondence point distributions are acquired. Modeling the OSTHMD and the subject's dominant eye as a pinhole camera, findings show that a correspondence point distribution well distributed along the user's line of sight yields less variant parameter estimates. The estimated eye point location is studied in particular detail. The findings of the experiment are complemented with simulated data which show that image plane orientation is sensitive to the number of correspondence points. The simulated data also illustrate some interesting properties of the numerical stability of the calibration problem as a function of alignment noise, number of correspondence points, and correspondence point distribution.
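
    The simulation side of this abstract can be sketched as follows: re-run a DLT estimate under Gaussian alignment noise and observe how the spread of the recovered eyepoint shrinks as correspondence points are added. The noise level, geometry, and trial count are assumptions chosen for illustration, not the paper's experimental settings.

```python
import numpy as np

def dlt(X3, x2):
    """Minimal Direct Linear Transformation, as in the sketch under entry 7."""
    A = []
    for (X, Y, Z), (u, v) in zip(X3, x2):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    return np.linalg.svd(np.asarray(A))[2][-1].reshape(3, 4)

rng = np.random.default_rng(3)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # eyepoint at origin
sigma_px = 3.0                                          # alignment noise (pixels)

for n in (9, 25, 100):
    centers = []
    for _ in range(200):                                # Monte Carlo trials
        X3 = rng.uniform([-1, -1, 2], [1, 1, 6], size=(n, 3))
        xh = (P_true @ np.hstack([X3, np.ones((n, 1))]).T).T
        x2 = xh[:, :2] / xh[:, 2:] + rng.normal(0, sigma_px, (n, 2))
        P = dlt(X3, x2)
        centers.append(-np.linalg.inv(P[:, :3]) @ P[:, 3])
    print(f"n = {n:3d}: eyepoint std (m) =", np.round(np.std(centers, axis=0), 3))
```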

  • 9.
    Axholt, Magnus
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology. Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA).
    Skoglund, Martin
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Peterson, Stephen
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology. Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA).
    Cooper, Matthew
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology. Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA).
    Schön, Thomas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Gustafsson, Fredrik
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology. Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA).
    Ellis, Stephen
    NASA Ames Research Center, USA.
Optical See-Through Head Mounted Display: Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise, 2010. Report (Other academic)
    Abstract [en]

The correct spatial registration between virtual and real objects in optical see-through augmented reality implies accurate estimates of the user’s eyepoint relative to the location and orientation of the display surface. A common approach is to estimate the display parameters through a calibration procedure involving a subjective alignment exercise. Human postural sway and targeting precision contribute to imprecise alignments, which in turn adversely affect the display parameter estimation, resulting in registration errors between virtual and real objects. The technique commonly used has its origin in computer vision, and calibrates stationary cameras using hundreds of correspondence points collected instantaneously in one video frame, where precision is limited only by pixel quantization and image blur. In contrast, the input noise level is several orders of magnitude greater when a human operator manually collects correspondence points one by one. This paper investigates the effect of human alignment noise on view parameter estimation in an optical see-through head mounted display to determine how well a standard camera calibration method performs at greater noise levels than documented in the computer vision literature. Through Monte-Carlo simulations we show that it is particularly difficult to estimate the user’s eyepoint in depth, but that a greater distribution of correspondence points in depth helps mitigate the effects of human alignment noise.
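
    The depth effect reported above can be illustrated with the same minimal DLT machinery as in the sketch under entry 8: with fixed alignment noise, widening the depth range of the correspondence points reduces the error of the estimated eyepoint along the viewing axis. All settings below are illustrative assumptions, not the paper's.

```python
import numpy as np

def dlt(X3, x2):
    """Minimal Direct Linear Transformation (see the sketch under entry 8)."""
    A = []
    for (X, Y, Z), (u, v) in zip(X3, x2):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    return np.linalg.svd(np.asarray(A))[2][-1].reshape(3, 4)

rng = np.random.default_rng(4)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # eyepoint at origin

for z_near, z_far in ((2.0, 2.5), (2.0, 4.0), (2.0, 8.0)):
    depth_err = []
    for _ in range(200):                                # Monte Carlo trials
        X3 = rng.uniform([-1, -1, z_near], [1, 1, z_far], size=(25, 3))
        xh = (P_true @ np.hstack([X3, np.ones((25, 1))]).T).T
        x2 = xh[:, :2] / xh[:, 2:] + rng.normal(0, 3.0, (25, 2))
        P = dlt(X3, x2)
        C = -np.linalg.inv(P[:, :3]) @ P[:, 3]
        depth_err.append(abs(C[2]))                     # depth-axis error (m)
    print(f"depth range {z_near}-{z_far} m: mean |dz| = {np.mean(depth_err):.3f}")
```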

  • 10.
    Axholt, Magnus
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Skoglund, Martin
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Peterson, Stephen
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Cooper, Matthew
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Schön, Thomas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Gustafsson, Fredrik
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ellis, Stephen
    NASA Ames Research Center, USA.
Optical See-Through Head Mounted Display: Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise, 2010. In: Proceedings of the 54th Annual Meeting of the Human Factors and Ergonomics Society, 2010. Conference paper (Refereed)
    Abstract [en]

The correct spatial registration between virtual and real objects in optical see-through augmented reality implies accurate estimates of the user’s eyepoint relative to the location and orientation of the display surface. A common approach is to estimate the display parameters through a calibration procedure involving a subjective alignment exercise. Human postural sway and targeting precision contribute to imprecise alignments, which in turn adversely affect the display parameter estimation, resulting in registration errors between virtual and real objects. The technique commonly used has its origin in computer vision, and calibrates stationary cameras using hundreds of correspondence points collected instantaneously in one video frame, where precision is limited only by pixel quantization and image blur. In contrast, the input noise level is several orders of magnitude greater when a human operator manually collects correspondence points one by one. This paper investigates the effect of human alignment noise on view parameter estimation in an optical see-through head mounted display to determine how well a standard camera calibration method performs at greater noise levels than documented in the computer vision literature. Through Monte-Carlo simulations we show that it is particularly difficult to estimate the user’s eyepoint in depth, but that a greater distribution of correspondence points in depth helps mitigate the effects of human alignment noise.

  • 11.
    Peterson, Stephen D.
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Axholt, Magnus
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Cooper, Matthew
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ellis, Stephen R.
    Human Systems Integration Division, NASA Ames Research Center, USA.
Detection Thresholds for Label Motion in Visually Cluttered Displays, 2010. In: IEEE Virtual Reality Conference (VR), 2010, Piscataway, NJ, USA: IEEE, 2010, p. 203-206. Conference paper (Refereed)
    Abstract [en]

While label placement algorithms are generally successful in managing visual clutter by preventing label overlap, they can also cause significant label movement in dynamic displays. This study investigates motion detection thresholds for various types of label movement in realistic and complex virtual environments, which can be helpful for designing less salient and disturbing algorithms. Our results show that label movement in stereoscopic depth is less noticeable than similar lateral monoscopic movement, inherent to 2D label placement algorithms. Furthermore, label movement can be introduced more readily into the visual periphery (over 15° eccentricity) because of reduced sensitivity in this region. Moreover, under the realistic viewing conditions that we used, motion of isolated labels is more easily detected than that of overlapping labels. This perhaps counterintuitive finding may be explained by visual masking due to the visual clutter arising from the label overlap. The quantitative description of the findings presented in this paper should be useful not only for label placement applications, but also for any cluttered AR or VR application in which designers wish to control the users’ visual attention, either making text labels more or less noticeable as needed.

  • 12.
    Peterson, Stephen D.
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Axholt, Magnus
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Cooper, Matthew
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ellis, Stephen R.
    NASA, Ames Research Center, USA.
Evaluation of Alternative Label Placement Techniques in Dynamic Virtual Environments, 2009. In: International Symposium on Smart Graphics, Berlin / Heidelberg: Springer, 2009, p. 43-55. Conference paper (Refereed)
    Abstract [en]

This paper reports on an experiment comparing label placement techniques in a dynamic virtual environment rendered on a stereoscopic display. The labeled objects are in motion, and thus labels need to continuously maintain separation for legibility. The results from our user study show that traditional label placement algorithms, which always strive for full label separation in the 2D view plane, produce motion that disturbs the user in a visual search task. Alternative algorithms maintaining separation in only one spatial dimension are rated less disturbing, even though several modifications are made to traditional algorithms for reducing the amount and salience of label motion. Maintaining depth separation of labels through stereoscopic disparity adjustments is judged the least disturbing, while such separation yields similar user performance to traditional algorithms. These results are important in the design of future 3D user interfaces, where disturbing or distracting motion due to object labeling should be avoided.

  • 13.
    Peterson, Stephen D.
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Axholt, Magnus
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Cooper, Matthew
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ellis, Stephen R.
    NASA Ames Research Center, USA.
Visual Clutter Management in Augmented Reality: Effects of Three Label Separation Methods on Spatial Judgments, 2009. In: IEEE Symposium on 3D User Interfaces (3DUI), Lafayette (LA), USA: IEEE, 2009, p. 111-118. Conference paper (Refereed)
    Abstract [en]

This paper reports an experiment comparing three label separation methods for reducing visual clutter in Augmented Reality (AR) displays. We contrasted two common methods of avoiding visual overlap by moving labels in the 2D view plane with a third that distributes overlapping labels in stereoscopic depth. The experiment measured user identification performance during spatial judgment tasks in static scenes. The three methods were compared with a control condition in which no label separation method was employed. The results showed significant performance improvements, generally 15-30%, for all three methods over the control; however, these methods were statistically indistinguishable from each other. In-depth analysis showed significant performance degradation when the 2D view plane methods produced potentially confusing spatial correlations between labels and the markers they designate. Stereoscopically separated labels were subjectively judged harder to read than view-plane separated labels. Since measured performance was affected both by label legibility and spatial correlation of labels and their designated objects, it is likely that the improved spatial correlation of stereoscopically separated labels and their designated objects has compensated for poorer stereoscopic text legibility. Future testing with dynamic scenes is expected to more clearly distinguish the three label separation techniques.

  • 14.
    Peterson, Stephen D.
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Axholt, Magnus
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ellis, Stephen R.
    NASA Ames Research Center.
Comparing disparity based label segregation in augmented and virtual reality, 2008. In: ACM Symposium on Virtual Reality Software and Technology (VRST), New York, NY, USA: ACM, 2008, p. 285-286. Conference paper (Refereed)
    Abstract [en]

    Recent work has shown that overlapping labels in far-field AR environments can be successfully segregated by remapping them to predefined stereoscopic depth layers. User performance was found to be optimal when setting the interlayer disparity to 5-10 arcmin. The current paper investigates to what extent this label segregation technique, label layering, is affected by important perceptual defects in AR such as registration errors and mismatches in accommodation, visual resolution and contrast. A virtual environment matched to a corresponding AR condition but lacking these problems showed a reduction in average response time by 10%. However, the performance pattern for different label layering parameters was not significantly different in the AR and VR environments, showing robustness of this label segregation technique against such perceptual issues.

  • 15.
    Peterson, Stephen D.
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Axholt, Magnus
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ellis, Stephen R.
    Human Systems Integration Division, NASA Ames Research Center.
Label Segregation by Remapping Stereoscopic Depth in Far-Field Augmented Reality, 2008. In: 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, 2008. ISMAR 2008 / [ed] Mark A. Livingston, Oliver Bimber, Hideo Saito, Piscataway, NJ, USA: IEEE, 2008, p. 143-152. Conference paper (Refereed)
    Abstract [en]

This paper describes a novel technique for segregating overlapping labels in stereoscopic see-through displays. The present study investigates the labeling of far-field objects, with distances ranging from 100 to 120 m. At these distances the stereoscopic disparity difference between objects is below 1 arcmin, so labels rendered at the same distance as their corresponding objects appear as if on a flat layer in the display. This flattening is due to limitations of both display and human visual resolution. By remapping labels to predetermined depth layers on the optical path between the observer and the labeled object, an interlayer disparity ranging from 5 to 20 arcmin can be achieved for 5 overlapping labels. The present study evaluates the impact of such depth separation of superimposed layers, and finds that a 5 arcmin interlayer disparity yields a significantly lower response time, over 20% on average, in a visual search task compared to correctly registering labels and objects in depth. Notably, the performance does not improve when doubling the interlayer disparity to 10 arcmin and, surprisingly, the performance degrades significantly when again doubling the interlayer disparity to 20 arcmin, approximating the performance in situations with no interlayer disparity. These results confirm that our technique can be used to segregate overlapping labels in the far visual field, without the cost associated with traditional label placement algorithms.
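
    The arcmin figures above follow directly from the small-angle disparity formula: the relative binocular disparity between two depths is roughly IPD × (1/d1 − 1/d2) radians. A short check, assuming a typical 65 mm interpupillary distance (the IPD value is an assumption, not stated in the abstract):

```python
import math

ARCMIN_PER_RAD = 60 * 180 / math.pi

def disparity_arcmin(d1_m, d2_m, ipd_m=0.065):
    """Relative binocular disparity between depths d1 and d2, in arcmin."""
    return abs(ipd_m * (1.0 / d1_m - 1.0 / d2_m)) * ARCMIN_PER_RAD

# Far-field objects 100 m and 120 m away: below 1 arcmin, hence "flat".
print(f"100 m vs 120 m: {disparity_arcmin(100, 120):.2f} arcmin")

# Depth of a remapped layer giving 5 arcmin of disparity against a 100 m object.
delta_inv_d = 5 / ARCMIN_PER_RAD / 0.065       # required 1/d difference (1/m)
print(f"5 arcmin layer: about {1.0 / (1.0 / 100 + delta_inv_d):.1f} m")
```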

  • 16.
    Peterson, Stephen D.
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Axholt, Magnus
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ellis, Stephen R.
    Human Systems Integration Division, NASA Ames Research Center.
Managing Visual Clutter: A Generalized Technique for Label Segregation using Stereoscopic Disparity, 2008. In: IEEE Virtual Reality Conference, 2008. VR '08, Los Alamitos, CA, USA: IEEE Computer Society, 2008, p. 169-176. Conference paper (Refereed)
    Abstract [en]

We present a new technique for managing visual clutter caused by overlapping labels in complex information displays. This technique, "label layering", utilizes stereoscopic disparity as a means to segregate labels in depth for increased legibility and clarity. By distributing overlapping labels in depth, we have found that selection time during a visual search task in situations with high levels of overlap is reduced by four seconds, or 24%. Our data show that the depth order of the labels must be correlated with the distance order of their corresponding objects. Since a random distribution of stereoscopic disparity, in contrast, impairs performance, the benefit is not solely due to disparity-based image segregation. An algorithm using our label layering technique could accordingly be an alternative to traditional label placement algorithms, which avoid label overlap at the cost of distracting motion, symbology dimming or label size reduction.

  • 17.
    Peterson, Stephen D.
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Axholt, Magnus
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ellis, Stephen R.
    NASA Ames Research Center.
Objective and Subjective Assessment of Stereoscopically Separated Labels in Augmented Reality, 2009. In: Computers & Graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 33, no 1, p. 23-33. Article in journal (Refereed)
    Abstract [en]

We present a new technique for managing visual clutter caused by overlapping labels in complex information displays. This technique, label layering, utilizes stereoscopic disparity as a means to segregate labels in depth for increased legibility and clarity. By distributing overlapping labels in depth, we have found that selection time during a visual search task in situations with high levels of visual overlap is reduced by 4 s, or 24%. Our data show that the stereoscopically based depth order of the labels must be correlated with the distance order of their corresponding objects for practical benefit. An algorithm using our label layering technique could accordingly be an alternative to traditional label placement algorithms, which avoid label overlap at the cost of distracting view plane motion, symbology dimming or label size reduction.
