Digitala Vetenskapliga Arkivet

Search results 201 - 250 of 3114
  • 201.
    Ballerini, L.
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Bocchi, L.
    A Fractal Approach to Predict Fat Content in Meat Images (2001). Conference paper (Refereed)
    Abstract [en]

    Intramuscular fat content in meat influences some important meat quality

  • 202.
    Ballerini, L.
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Bocchi, L.
    Segmentation of liver images by texture and genetic snakes (2002). Conference paper (Refereed)
  • 203.
    Ballerini, L.
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Bocchi, L.
    Hullberg, A.
    Determination of Pores in Pig Meat Images (2002). In: International Conference on Computer Vision and Graphics, Zakopane, Poland, 2002, p. 70-78. Conference paper (Refereed)
    Abstract [en]

    In this paper we present an image processing application for

  • 204.
    Ballerini, L.
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Borgefors, G.
    Theory and Applications of Image Analysis at the Centre for Image Analysis (2001). In: 5th Korea-Germany Joint Workshop on Advanced Medical Image Processing, Seoul, Korea, 2001. Conference paper (Other scientific)
  • 205.
    Ballerini, L.
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Hullberg, A.
    Determination of holes in pig meat images (2002). In: Proceedings SSAB'02 Symposium on Image Analysis, 2002, p. 53-56. Conference paper (Other scientific)
    Abstract [en]

    In this paper we present an image processing application for

  • 206.
    Ballerini, L.
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Högberg, A.
    How Do People Choose Meat? (2001). Conference paper (Refereed)
    Abstract [en]

    In this paper we present a survey carried out to understand the choice of

  • 207.
    Ballerini, L.
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Högberg, A.
    Borgefors, G.
    Bylund, A.-C.
    Lindgård, A.
    Lundström, K.
    Rakotonirainy, O.
    Soussi, B.
    A Segmentation Technique to Determine Fat Content in NMR Images of Beef Meat (2002). In: IEEE Transactions on Nuclear Science, Vol. 49, no 1, p. 195-199. Article in journal (Refereed)
    Abstract [en]

    The world of meat faces a permanent need for new methods of meat

  • 208.
    Ballerini, L.
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Högberg, A.
    Borgefors, G.
    Bylund, A.-C.
    Lindgård, A.
    Lundström, K.
    Rakotonirainy, O.
    Soussi, B.
    Testing MRI and image analysis techniques for fat quantification in meat science (2000). Conference paper (Refereed)
    Abstract [en]

    The world of meat faces a permanent need for new methods of meat quality

  • 209.
    Ballerini, L.
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Högberg, A.
    Lundström, K.
    Borgefors, G.
    Colour Image Analysis Technique for Measuring of Fat in Meat: An Application for the Meat Industry (2001). Conference paper (Refereed)
    Abstract [en]

    Intramuscular fat content in meat influences some important meat quality

  • 210.
    Ballerini, L.
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Piazza, E.
    A picture of doctoral studies in Italy (2001). In: Eurodoc 2001, European Conference of Doctoral Students, Uppsala, Sweden, 2001. Conference paper (Other scientific)
  • 211.
    Ballerini, L.
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Piazza, E.
    The future of Italian doctors (2002). In: Eurodoc 2002, European Conference of Doctoral Students, Girona, Spain, 2002. Conference paper (Other scientific)
  • 212.
    Balusulapalem, Hanumat Sri Naga Sai
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology.
    Amarwani, Julie Rajkumar
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology.
    Precise Robot Navigation Between Fixed End and Starting Points - Combining GPS and Image Analysis (2024). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The utilization of image analysis and object detection spans various industries, serving purposes such as anomaly detection, automated workflows, and monitoring tool wear and tear. This thesis addresses the challenge of achieving precise robot navigation between fixed start and end points by combining GPS and image analysis. The underlying motivation for tackling this issue lies in facilitating the creation of immersive videos, mainly aimed at individuals with disabilities, enabling them to virtually explore diverse locations through a compilation of shorter video clips.

    The research delves into diverse models for object detection frameworks and tools, including NVIDIA DetectNet and YOLOv5. Through a comprehensive evaluation of their performance and accuracy, the thesis proceeds to implement a prototype system utilizing an Elegoo Smart Robot Car, a camera, a GPS module, and an embedded NVIDIA Jetson Nano system.

    Performance metrics such as precision, recall, and mAP are employed to assess the models' effectiveness. The findings indicate that the system demonstrates high accuracy and speed in detection, exhibiting robustness across varying lighting conditions and camera settings.

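    A small Python sketch (illustrative only, not the thesis code) of how detection precision and recall at a fixed IoU threshold, the kind of metrics listed above, can be computed; mAP would additionally average the area under the precision-recall curve over classes. Box format and the greedy matching scheme are assumptions:

        # Hypothetical helper: greedily match predicted boxes to ground truth at an IoU
        # threshold and report precision/recall for one image. Boxes are (x1, y1, x2, y2).

        def iou(a, b):
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            return inter / (area_a + area_b - inter) if inter > 0 else 0.0

        def precision_recall(predictions, ground_truth, iou_thr=0.5):
            matched, tp = set(), 0
            for p in sorted(predictions, key=lambda d: d["score"], reverse=True):
                best_j, best_iou = None, iou_thr
                for j, g in enumerate(ground_truth):
                    if j not in matched and iou(p["box"], g) >= best_iou:
                        best_j, best_iou = j, iou(p["box"], g)
                if best_j is not None:
                    matched.add(best_j)
                    tp += 1
            fp = len(predictions) - tp
            fn = len(ground_truth) - tp
            precision = tp / (tp + fp) if predictions else 0.0
            recall = tp / (tp + fn) if ground_truth else 0.0
            return precision, recall
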
  • 213.
    Banerjee, Avijit
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Mukherjee, Moumita
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Satpute, Sumeet Gajanan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Nikolakopoulos, George
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Resiliency in Space Autonomy: a Review (2023). In: Current Robotics Reports, E-ISSN 2662-4087, Vol. 4, p. 1-12. Article, review/survey (Refereed)
    Abstract [en]

    Purpose of Review: The article provides an extensive overview on the resilient autonomy advances made across various missions, orbital or deep-space, that captures the current research approaches while investigating the possible future direction of resiliency in space autonomy.

    Recent Findings: In recent years, the need for automated operations in space applications has been rising, ranging from spacecraft proximity operations, navigation and station-keeping applications to entry, descent and landing, planetary surface exploration, etc. Also, with the rise of miniaturization concepts in spacecraft, advanced missions with multiple spacecraft platforms introduce more complex behaviours and interactions among the agents, which drives the need for higher levels of autonomy and for collaborative behaviour coupled with robustness to counter unforeseen uncertainties. This collective behaviour is now referred to as resiliency in autonomy. As space missions are getting more and more complex, for example applications where a platform physically interacts with non-cooperative space objects (debris) or planetary bodies, coupled with hostile, unpredictable, and extreme environments, there is a rising need for resilient autonomy solutions.

    Summary: Resilience with its key attributes of robustness, redundancy and resourcefulness will lead toward new and enhanced mission paradigms of space missions.

  • 214.
    Banerjee, Subhashis
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Lifelong Learning with Dynamic Convolutions for Glioma Segmentation from Multi-Modal MRI (2023). In: Medical imaging 2023 / [ed] Colliot, O.; Isgum, I., SPIE - International Society for Optical Engineering, 2023, Vol. 12464, article id 124643J. Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel solution for catastrophic forgetting in lifelong learning (LL) using Dynamic Convolution Neural Network (Dy-CNN). The proposed dynamic convolution layer can adapt convolution filters by learning kernel coefficients or weights based on the input image. The suitability of the proposed Dy-CNN in a lifelong sequential learning-based scenario with multi-modal MR images is experimentally demonstrated for the segmentation of Glioma tumors from multi-modal MR images. Experimental results demonstrated the superiority of the Dy-CNN-based segmenting network in terms of learning through multi-modal MRI images and better convergence of lifelong learning-based training.

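    A minimal PyTorch sketch of the general dynamic-convolution idea (candidate kernels mixed per input via learned coefficients); this is a generic illustration with assumed layer and variable names, not the authors' Dy-CNN code:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class DynamicConv2d(nn.Module):
            """Mixes K candidate kernels with input-dependent coefficients (illustrative sketch)."""
            def __init__(self, in_ch, out_ch, kernel_size=3, num_kernels=4):
                super().__init__()
                self.weight = nn.Parameter(
                    torch.randn(num_kernels, out_ch, in_ch, kernel_size, kernel_size) * 0.01)
                self.attn = nn.Sequential(              # predicts mixing coefficients from the input
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(in_ch, num_kernels), nn.Softmax(dim=1))
                self.padding = kernel_size // 2

            def forward(self, x):
                coeff = self.attn(x)                    # (B, K) per-sample kernel coefficients
                outs = []
                for b in range(x.shape[0]):             # aggregate kernels separately per sample
                    w = torch.einsum("k,koihw->oihw", coeff[b], self.weight)
                    outs.append(F.conv2d(x[b:b + 1], w, padding=self.padding))
                return torch.cat(outs, dim=0)

        # Example: one modality-adaptive layer, e.g. 4 MR modalities stacked as input channels
        layer = DynamicConv2d(in_ch=4, out_ch=16)
        y = layer(torch.randn(2, 4, 64, 64))            # -> (2, 16, 64, 64)
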
  • 215.
    Barbalau, Antonio
    et al.
    Univ Bucharest, Romania.
    Ionescu, Radu Tudor
    Univ Bucharest, Romania; SecurifAI, Romania; MBZ Univ Artificial Intelligence, U Arab Emirates.
    Georgescu, Mariana-Iuliana
    Univ Bucharest, Romania; SecurifAI, Romania.
    Dueholm, Jacob
    Aalborg Univ, Denmark; Milestone Syst, Denmark.
    Ramachandra, Bharathkumar
    Geopipe Inc, NY 10019 USA.
    Nasrollahi, Kamal
    Aalborg Univ, Denmark; Milestone Syst, Denmark.
    Khan, Fahad
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. MBZ Univ Artificial Intelligence, U Arab Emirates.
    Moeslund, Thomas B.
    Aalborg Univ, Denmark.
    Shah, Mubarak
    Univ Cent Florida, FL 32816 USA.
    SSMTL++: Revisiting self-supervised multi-task learning for video anomaly detection (2023). In: Computer Vision and Image Understanding, ISSN 1077-3142, E-ISSN 1090-235X, Vol. 229, article id 103656. Article in journal (Refereed)
    Abstract [en]

    A self-supervised multi-task learning (SSMTL) framework for video anomaly detection was recently introduced in literature. Due to its highly accurate results, the method attracted the attention of many researchers. In this work, we revisit the self-supervised multi-task learning framework, proposing several updates to the original method. First, we study various detection methods, e.g. based on detecting high-motion regions using optical flow or background subtraction, since we believe the currently used pre-trained YOLOv3 is suboptimal, e.g. objects in motion or objects from unknown classes are never detected. Second, we modernize the 3D convolutional backbone by introducing multi-head self-attention modules, inspired by the recent success of vision transformers. As such, we alternatively introduce both 2D and 3D convolutional vision transformer (CvT) blocks. Third, in our attempt to further improve the model, we study additional self-supervised learning tasks, such as predicting segmentation maps through knowledge distillation, solving jigsaw puzzles, estimating body pose through knowledge distillation, predicting masked regions (inpainting), and adversarial learning with pseudo-anomalies. We conduct experiments to assess the performance impact of the introduced changes. Upon finding more promising configurations of the framework, dubbed SSMTL++v1 and SSMTL++v2, we extend our preliminary experiments to more data sets, demonstrating that our performance gains are consistent across all data sets. In most cases, our results on Avenue, ShanghaiTech and UBnormal raise the state-of-the-art performance bar to a new level.

  • 216.
    Barbosa, Fernando S.
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Lacerda, Bruno
    Univ Oxford, Oxford Robot Inst, Oxford, England..
    Duckworth, Paul
    Univ Oxford, Oxford Robot Inst, Oxford, England..
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Hawes, Nick
    Univ Oxford, Oxford Robot Inst, Oxford, England..
    Risk-Aware Motion Planning in Partially Known Environments (2021). In: 2021 60th IEEE Conference on Decision and Control (CDC), Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 5220-5226. Conference paper (Refereed)
    Abstract [en]

    Recent trends envisage robots being deployed in areas deemed dangerous to humans, such as buildings with gas and radiation leaks. In such situations, the model of the underlying hazardous process might be unknown to the agent a priori, giving rise to the problem of planning for safe behaviour in partially known environments. We employ Gaussian process regression to create a probabilistic model of the hazardous process from local noisy samples. The result of this regression is then used by a risk metric, such as the Conditional Value-at-Risk, to reason about the safety at a certain state. The outcome is a risk function that can be employed in optimal motion planning problems. We demonstrate the use of the proposed function in two approaches. First is a sampling-based motion planning algorithm with an event-based trigger for online replanning. Second is an adaptation to the incremental Gaussian Process motion planner (iGPMP2), allowing it to quickly react and adapt to the environment. Both algorithms are evaluated in representative simulation scenarios, where they demonstrate the ability of avoiding high-risk areas.
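
    A minimal Python sketch (an illustration under assumed data and library choices, not the paper's implementation) of the two ingredients combined in the abstract: Gaussian process regression of the hazard from noisy local samples, and a Conditional Value-at-Risk estimate at a query state computed from posterior samples:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Noisy local measurements of a hazardous quantity (e.g. gas concentration)
        X_obs = np.random.uniform(0, 10, size=(30, 2))            # sampled 2D locations
        y_obs = np.exp(-np.linalg.norm(X_obs - 5.0, axis=1)) + 0.05 * np.random.randn(30)

        gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.01), normalize_y=True)
        gp.fit(X_obs, y_obs)

        def cvar(x_query, alpha=0.9, n_samples=500):
            """CVaR_alpha of the predicted hazard at x_query, from GP posterior samples."""
            samples = gp.sample_y(np.atleast_2d(x_query), n_samples=n_samples).ravel()
            var = np.quantile(samples, alpha)                     # Value-at-Risk threshold
            return samples[samples >= var].mean()                 # mean of the worst (1-alpha) tail

        risk = cvar([3.0, 4.0])

    The returned risk value can then be used as a state cost inside a motion planner, which is the role the paper's risk function plays.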

  • 217. Barekatain, Mohammadamin
    et al.
    Martí Rabadán, Miquel
    KTH, School of Computer Science and Communication (CSC). Polytechnic University of Catalonia, Barcelona.
    Shih, Hsueh-Fu
    Murray, Samuel
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Nakayama, Kotaro
    Matsuo, Yutaka
    Prendinger, Helmut
    Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection (2017). In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 2153-2160. Conference paper (Refereed)
    Abstract [en]

    Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios. We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.

  • 218.
    Baris, Antonios
    Stockholm University, Faculty of Law, Department of Law.
    AI covers: legal notes on audio mining and voice cloning (2024). In: Journal of Intellectual Property Law & Practice, ISSN 1747-1532, E-ISSN 1747-1540. Article in journal (Refereed)
    Abstract [en]
    • This article explores the impact of Artificial Intelligence (AI) on the music industry, particularly focusing on the case of AI-generated covers. The emergence of AI technologies has been raising concerns not just about the originality and protection of AI-generated outputs but also about the complex input and training phase of those systems.

    • The focus of this contribution is the latter, analysing the case of AI covers from the perspective of copyright and image rights. In the first part, an overview of the text and data mining (TDM) exception found in Article 4 of Directive 2019/790 is presented, with a primary focus on the opt-out mechanism in connection with the three-step test. Moving to the second part, the analysis delves into the complexities of voice cloning, highlighting the absence of a comprehensive European Union regime for image rights.

    • By addressing these issues, this contribution unveiled two crucial points. First, AI models trained on various artists’ works to create and spread deepfake covers not only violate copyright but also reveal shortcomings in the TDM exception. Second, while the multifaceted image right regime may not be as exhaustive as necessary, it proves to be a viable solution against voice cloning with anticipated advancements in the future.

  • 219.
    Barkman, Richard Dan William
    Karlstad University, Faculty of Health, Science and Technology (starting 2013).
    Object Tracking Achieved by Implementing Predictive Methods with Static Object Detectors Trained on the Single Shot Detector Inception V2 Network (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this work, the possibility of realising object tracking by implementing predictive methods with static object detectors is explored. The static object detectors are obtained as models trained on a machine learning algorithm, or in other words, a deep neural network. Specifically, it is the single shot detector inception v2 network that will be used to train such models. Predictive methods will be incorporated to the end of improving the obtained models’ precision, i.e. their performance with respect to accuracy. Namely, Lagrangian mechanics will be employed to derive equations of motion for three different scenarios in which the object is to be tracked. These equations of motion will be implemented as predictive methods by discretising and combining them with four different iterative formulae.

    In ch. 1, the fundamentals of supervised machine learning, neural networks, convolutional neural networks as well as the workings of the single shot detector algorithm, approaches to hyperparameter optimisation and other relevant theory is established. This includes derivations of the relevant equations of motion and the iterative formulae with which they were implemented. In ch. 2, the experimental set-up that was utilised during data collection, and the manner by which the acquired data was used to produce training, validation and test datasets is described. This is followed by a description of how the approach of random search was used to train 64 models on 300×300 datasets, and 32 models on 512×512 datasets. Consecutively, these models are evaluated based on their performance with respect to camera-to-object distance and object velocity. In ch. 3, the trained models were verified to possess multi-scale detection capabilities, as is characteristic of models trained on the single shot detector network. While the former is found to be true irrespective of the resolution-setting of the dataset that the model has been trained on, it is found that the performance with respect to varying object velocity is significantly more consistent for the lower resolution models as they operate at a higher detection rate.

    Ch. 3 continues with that the implemented predictive methods are evaluated. This is done by comparing the resulting deviations when they are let to predict the missing data points from a collected detection pattern, with varying sampling percentages. It is found that the best predictive methods are those that make use of the least amount of previous data points. This followed from that the data upon which evaluations were made contained an unreasonable amount of noise, considering that the iterative formulae implemented do not take noise into account. Moreover, the lower resolution models were found to benefit more than those trained on the higher resolution datasets because of the higher detection frequency they can employ.

    In ch. 4, it is argued that the concept of combining predictive methods with static object detectors to the end of obtaining an object tracker is promising. Moreover, the models obtained on the single shot detector network are concluded to be good candidates for such applications. However, the predictive methods studied in this thesis should be replaced with some method that can account for noise, or be extended to be able to account for it. A profound finding is that the single shot detector inception v2 models trained on a low-resolution dataset were found to outperform those trained on a high-resolution dataset in certain regards due to the higher detection rate possible on lower resolution frames. Namely, in performance with respect to object velocity and in that predictive methods performed better on the low-resolution models.

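    A much-simplified Python sketch of the idea of bridging detector drop-outs with a motion prediction; the thesis derives its predictors from Lagrangian equations of motion and several iterative formulae, whereas this illustration only uses a constant-velocity extrapolation (an assumption made for brevity):

        import numpy as np

        def fill_missing_centroids(track):
            """Fill None entries in a per-frame list of (x, y) detections by
            constant-velocity extrapolation from the last two known positions."""
            filled = list(track)
            for i, det in enumerate(filled):
                if det is None and i >= 2 and filled[i - 1] is not None and filled[i - 2] is not None:
                    prev, prev2 = np.asarray(filled[i - 1]), np.asarray(filled[i - 2])
                    filled[i] = tuple(prev + (prev - prev2))      # x_t = x_{t-1} + v * dt
            return filled

        # Example: the detector missed the object in frames 2 and 3
        print(fill_missing_centroids([(0, 0), (1, 2), None, None, (4, 8)]))
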
  • 220.
    Barman, Sourav
    et al.
    Noakhali Science and Technology University, Noakhali, Bangladesh.
    Biswas, Md Raju
    Noakhali Science and Technology University, Noakhali, Bangladesh.
    Marjan, Sultana
    Noakhali Science and Technology University, Noakhali, Bangladesh.
    Nahar, Nazmun
    Noakhali Science and Technology University, Noakhali, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Transfer Learning Based Skin Cancer Classification Using GoogLeNet (2023). In: Machine Intelligence and Emerging Technologies - First International Conference, MIET 2022, Proceedings, Part 1 / [ed] Md. Shahriare Satu; Mohammad Ali Moni; M. Shamim Kaiser; Mohammad Shamsul Arefin, Springer Science and Business Media Deutschland GmbH, 2023, Vol. 1, p. 238-252. Conference paper (Refereed)
    Abstract [en]

    Skin cancer has been one of the top three cancers that can be fatal when caused by broken DNA. Damaged DNA causes cells to expand uncontrollably, and the rate of growth is currently increasing rapidly. Some studies have been conducted on the computerized detection of malignancy in skin lesion images. However, due to some problematic aspects such as light reflections from the skin surface, differences in color lighting, and varying forms and sizes of the lesions, analyzing these images is extremely difficult. As a result, evidence-based automatic skin cancer detection can help pathologists improve their accuracy and competency in the early stages of the disease. In this paper, we present a transfer learning strategy based on a convolutional neural network (CNN) model for accurately classifying various types of skin lesions. Preprocessing normalizes the input photos for accurate classification; data augmentation increases the number of images, which enhances classification rate accuracy. The performance of the GoogLeNet transfer learning model is compared to that of other transfer learning models such as Xception, InceptionResNetV2, and DenseNet, among others. The model was tested on the ISIC dataset, and we ended up with the highest training and testing accuracy of 91.16% and 89.93%, respectively. When compared to existing transfer learning models, the final results of our proposed GoogLeNet transfer learning model characterize it as more dependable and resilient.
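
    A minimal PyTorch/torchvision sketch of the transfer-learning setup described above (ImageNet-pretrained GoogLeNet with a replaced classification head); the class count, freezing policy and hyperparameters are illustrative assumptions, not those of the paper:

        import torch
        import torch.nn as nn
        from torchvision import models

        num_classes = 7                                   # e.g. ISIC lesion categories (assumption)
        # Requires torchvision >= 0.13 for the weights enum API
        model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)

        for p in model.parameters():                      # freeze the pretrained feature extractor
            p.requires_grad = False
        model.fc = nn.Linear(model.fc.in_features, num_classes)   # new trainable head

        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        # One illustrative training step on a dummy batch of 224x224 RGB images
        images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, num_classes, (8,))
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()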

  • 221.
    Barnada, Marc
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Goethe University of Frankfurt, Germany.
    Conrad, Christian
    Goethe University of Frankfurt, Germany.
    Bradler, Henry
    Goethe University of Frankfurt, Germany.
    Ochs, Matthias
    Goethe University of Frankfurt, Germany.
    Mester, Rudolf
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Goethe University of Frankfurt, Germany.
    Estimation of Automotive Pitch, Yaw, and Roll using Enhanced Phase Correlation on Multiple Far-field Windows (2015). In: 2015 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2015, p. 481-486. Conference paper (Refereed)
    Abstract [en]

    The online estimation of yaw, pitch, and roll of a moving vehicle is an important ingredient for systems which estimate egomotion and the 3D structure of the environment in a moving vehicle from video information. We present an approach to estimate these angular changes from monocular visual data, based on the fact that the motion of far distant points does not depend on translation, but only on the current rotation of the camera. The presented approach does not require features (corners, edges, ...) to be extracted. It also allows the illumination changes from frame to frame to be estimated in parallel, and thus largely stabilizes the estimation of image correspondences and motion vectors, which are most often central entities needed for computing scene structure, distances, etc. The method is significantly less complex and much faster than a full egomotion computation from features, such as PTAM [6], but it can be used to provide motion priors and reduce search spaces for more complex methods which perform a complete analysis of egomotion and the dynamic 3D structure of the scene in which a vehicle moves.
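
    A minimal numpy sketch of plain phase correlation between two image windows, the basic operation that the paper enhances and applies to several far-field windows; function and variable names are illustrative:

        import numpy as np

        def phase_correlation_shift(win_a, win_b):
            """Estimate the integer translation between two equally sized windows
            via the normalized cross-power spectrum (basic phase correlation)."""
            Fa, Fb = np.fft.fft2(win_a), np.fft.fft2(win_b)
            cross_power = Fa * np.conj(Fb)
            cross_power /= np.abs(cross_power) + 1e-12          # keep phase only
            corr = np.fft.ifft2(cross_power).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap indices above the midpoint to negative shifts
            shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
            return tuple(shifts)                                 # (row_shift, col_shift)

        # Example: the second window is the first one rolled by (3, -5) pixels
        a = np.random.rand(64, 64)
        b = np.roll(a, shift=(3, -5), axis=(0, 1))
        print(phase_correlation_shift(b, a))                     # -> (3, -5)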

  • 222.
    Barnden, L
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Kwiatek, R
    Lau, Y
    Hutton, B
    Thurfjell, L
    Pile, K
    Rowe, C
    Validation of fully automatic brain SPET to MR co-registration (2000). In: European Journal of Nuclear Medicine, ISSN 0340-6997, Vol. 27, no 2, p. 147-154. Article in journal (Refereed)
    Abstract [en]

    Fully automatic co-registration of functional to anatomical brain images using information intrinsic to the scans has been validated in a clinical setting for positron emission tomography (PET), but not for single-photon emission tomography (SPET). In thi

  • 223. Baroffio, L.
    et al.
    Cesana, M.
    Redondi, A.
    Tagliasacchi, M.
    Ascenso, J.
    Monteiro, P.
    Eriksson, Emil
    KTH, School of Electrical Engineering (EES), Communication Networks.
    Dan, G.
    Fodor, Viktoria
    KTH, School of Electrical Engineering (EES), Communication Networks.
    GreenEyes: Networked energy-aware visual analysis (2015). In: 2015 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2015, IEEE conference proceedings, 2015. Conference paper (Refereed)
    Abstract [en]

    The GreenEyes project aims at developing a comprehensive set of new methodologies, practical algorithms and protocols, to empower wireless sensor networks with vision capabilities. The key tenet of this research is that most visual analysis tasks can be carried out based on a succinct representation of the image, which entails both global and local features, while it disregards the underlying pixel-level representation. Specifically, GreenEyes will pursue the following goals: i) energy-constrained extraction of visual features; ii) rate-efficiency modelling and coding of visual feature; iii) networking streams of visual features. This will have a significant impact on several scenarios including, e.g., smart cities and environmental monitoring.

  • 224.
    Barreiro, Anabela
    et al.
    INESC-ID, Portugal.
    Souza, José G. C. de
    Unbabel, Portugal.
    Gatt, Albert
    University of Malta, Malta; Utrecht University, The Netherlands.
    Bhatt, Mehul
    Örebro University, School of Science and Technology.
    Lloret, Elena
    University of Alicante, Spain.
    Erdem, Aykut
    Koç University, Turkey.
    Gkatzia, Dimitra
    Edinburgh Napier University, United Kingdom.
    Moniz, Helena
    University of Lisbon, Portugal; INESC-ID, Portugal .
    Russo, Irene
    National Research Council, Italy.
    Kepler, Fábio N.
    Unbabel, Portugal .
    Calixto, Iacer
    Amsterdam University Medical Centers, The Netherlands.
    Paprzycki, Marcin
    Polish Academy of Sciences, Poland .
    Portet, François
    Grenoble Alpes University, France.
    Augenstein, Isabelle
    University of Copenhagen, Denmark .
    Alhasani, Mirela
    Epoka University, Albania.
    Multi3Generation: Multitask, Multilingual, Multimodal Language Generation (2022). In: Proceedings of the 23rd Annual Conference of the European Association for Machine Translation, European Association for Machine Translation, 2022, p. 345-346. Conference paper (Refereed)
    Abstract [en]

    This paper presents the Multitask, Multilingual, Multimodal Language Generation COST Action – Multi3Generation (CA18231), an interdisciplinary network of research groups working on different aspects of language generation. This "meta-paper" will serve as a reference for citation of the Action in future publications. It presents the objectives, challenges and links to the achieved outcomes.

  • 225.
    Barrera, Tony; Hast, Anders; Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    A fast all-integer ellipse discretization algorithm (2003). In: Graphics Programming Methods, 2003, p. 121-131. Chapter in book (Refereed)
  • 226.
    Barrera, Tony; Hast, Anders; Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    A fast and simple all-integer parametric line (2003). Chapter in book (Refereed)
  • 227. Barrera, Tony
    et al.
    Hast, Anders
    Creative Media Lab, University of Gävle.
    Bengtsson, Ewert
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    An alternative model for shading of diffuse light for rough materials (2008). In: Game Programming Gems 7 / [ed] Scott Jacobs, Boston: Charles River Media, 2008, 1, p. 373-380. Chapter in book (Other academic)
  • 228.
    Barrera, Tony; Hast, Anders; Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Faster Shading by Equal Angle Interpolation of Vectors (2004). In: IEEE Transactions on Visualization and Computer Graphics, Vol. 10, no 2, p. 217-223. Article in journal (Refereed)
  • 229.
    Barrera, Tony
    et al.
    Barrera-Kristiansen AB.
    Hast, Anders
    University of Gävle.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Minimal Acceleration Hermite Curves (2005). In: Game Programming Gems 5, Charles River Media, Hingham, Massachusetts, 2005, p. 225-231. Chapter in book (Refereed)
    Abstract [en]

    This gem shows how a curve with minimal acceleration can be obtained using Hermite splines [Hearn04]. Acceleration is higher in the bends and therefore this type of curve is a minimal bending curve. This type of curve can be useful for subdivision surfaces when it is required that the surface has this property, which assures that the surface is as smooth as possible. A similar approach for Bézier curves and subdivision can be found in [Overveld97]. It could also be very useful for camera movements [Vlachos01] since it allows that both the position and the direction of the camera can be set for the curve. Moreover, we show how several such curves can be connected in order to achieve continuity between the curve segments.
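
    For reference, the standard cubic Hermite segment that the gem builds on, and the bending (acceleration) energy that its tangent choice is meant to minimize, can be written as follows; the gem's closed-form solution for the tangents is not reproduced here:

        p(t) = (2t^3 - 3t^2 + 1)\,p_0 + (t^3 - 2t^2 + t)\,m_0 + (-2t^3 + 3t^2)\,p_1 + (t^3 - t^2)\,m_1,
        \qquad t \in [0,1],
        \qquad E[p] = \int_0^1 \lVert p''(t) \rVert^2 \, dt .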

  • 230.
    Barrera, Tony; Hast, Anders; Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Surface Construction with Near Least Square Acceleration based on Vertex Normals on Triangular Meshes (2002). In: Proceedings from Sigrad 2002, 2002, p. 17-22. Conference paper (Other scientific)
  • 231. Barrera, Tony
    et al.
    Hast, Anders
    Creative Media Lab, University of Gävle.
    Bengtsson, Ewert
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Trigonometric splines (2008). In: Game Programming Gems 7 / [ed] Scott Jacobs, Boston: Charles River Media, 2008, 1, p. 191-198. Chapter in book (Other (popular science, discussion, etc.))
  • 232.
    Basieva, Irina
    et al.
    Linnaeus University, Faculty of Technology, Department of Mathematics. Russian Acad Sci, Russia.
    Khrennikov, Andrei
    Linnaeus University, Faculty of Technology, Department of Mathematics.
    Quantum(-like) Formalization of Common Knowledge: Binmore-Brandenburger Operator Approach (2015). In: Quantum Interaction (QI 2014): 8th International Conference, QI 2014, Filzbach, Switzerland, June 30 - July 3, 2014, Revised Selected Papers / [ed] Harald Atmanspacher, Claudia Bergomi, Thomas Filk, Kirsty Kitto, Springer, 2015, Vol. 8951, p. 93-104. Conference paper (Refereed)
    Abstract [en]

    We present the detailed account of the quantum(-like) viewpoint to common knowledge. The Binmore-Brandenburger operator approach to the notion of common knowledge is extended to the quantum case. We develop a special quantum(-like) model of common knowledge based on information representations of agents which can be operationally represented by Hermitian operators. For simplicity, we assume that each agent constructs her/his information representation by using just one operator. However, different agents use in general representations based on noncommuting operators, i.e., incompatible representations. The quantum analog of basic system of common knowledge features K1 - K5 is derived.

  • 233. Baudoin, Y.
    et al.
    Doroftei, D.
    De Cubber, G.
    Berrabah, S. A.
    Pinzon, C.
    Warlet, F.
    Gancet, J.
    Motard, E.
    Ilzkovitz, M.
    Nalpantidis, Lazaros
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    View-finder: Robotics assistance to fire-fighting services and crisis management (2009). In: Safety, Security & Rescue Robotics (SSRR), 2009 IEEE International Workshop on, 2009, p. 1-6. Conference paper (Refereed)
    Abstract [en]

    In the event of an emergency due to a fire or other crisis, a necessary but time consuming pre-requisite, that could delay the real rescue operation, is to establish whether the ground or area can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots which have the primary task of gathering data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced Control station (COC). The robots will be equipped with a wide array of chemical sensors, on-board cameras, Laser and other sensors to enhance scene understanding and reconstruction. At the Base Station (BS) the data is processed and combined with geographical information originating from a web of sources; thus providing the personnel leading the operation with in-situ processed data that can improve decision making. This paper will focus on the Crisis Management Information System that has been developed for improving a Disaster Management Action Plan and for linking the Control Station with a out-site Crisis Management Centre, and on the software tools implemented on the mobile robot gathering data in the outdoor area of the crisis.

  • 234.
    Baudrier, Etienne
    et al.
    LSIIT, Illkirch, France.
    Busson, Sébastien
    CESR, Tours, France.
    Corsini, Silvio
    BCU, Lausanne, Switzerland.
    Delalandre, Mathieu
    CVC, Barcelona, Spain.
    Landré, Jérôme
    CReSTIC, Troyes, France.
    Morain-Nicolier, Frédéric
    CReSTIC, Troyes, France.
    Retrieval of the ornaments from the Hand-Press Period: An overview (2009). In: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, IEEE, 2009, p. 496-500. Conference paper (Refereed)
    Abstract [en]

    This paper deals with the topic of the retrieval of document images focused on a specific application: the ornaments of the Hand-Press period. It presents an overview as a result of the work and the discussions undertaken by a workgroup on this subject. The paper starts by giving a general view about digital libraries of ornaments and associated retrieval problematics. Two main issues are underlined: content based image retrieval (CBIR) and image difference visualization. Several contributions are summarized, commented and compared. Conclusions and open problems arising from this overview are twofold: 1. contributions on CBIR miss scale-invariant methods and don't provide significative evaluation results. 2. robust registration is the open problem for visual comparison.

  • 235.
    Bauer, Stefan
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Redmond, Stephen J.
    University College Dublin, University College Dublin.
    et al.,
    Real Robot Challenge: A Robotics Competition in the Cloud (2022). In: Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track, ML Research Press, 2022, p. 190-204. Conference paper (Refereed)
    Abstract [en]

    Dexterous manipulation remains an open problem in robotics. To coordinate efforts of the research community towards tackling this problem, we propose a shared benchmark. We designed and built robotic platforms that are hosted at the MPI-IS and can be accessed remotely. Each platform consists of three robotic fingers that are capable of dexterous object manipulation. Users are able to control the platforms remotely by submitting code that is executed automatically, akin to a computational cluster. Using this setup, i) we host robotics competitions, where teams from anywhere in the world access our platforms to tackle challenging tasks, ii) we publish the datasets collected during these competitions (consisting of hundreds of robot hours), and iii) we give researchers access to these platforms for their own projects.

  • 236.
    Bax, Gerhard
    Uppsala University, Teknisk-naturvetenskapliga vetenskapsområdet, Earth Sciences, Department of Earth Sciences. Uppsala University, Teknisk-naturvetenskapliga vetenskapsområdet, Earth Sciences, Department of Earth Sciences, Environment and Landscape Dynamics. ELD.
    Remote sensing and 3D visualization of geological structures in mountain ranges: examples from the Northern Scandinavian Caledonides and the south Tibetan Himalayas (2004). In: The 26th Nordic Geological Winter Meeting: Abstract volume, 2004, p. 105. Conference paper (Refereed)
  • 237.
    Bax, Gerhard
    et al.
    Uppsala University, Teknisk-naturvetenskapliga vetenskapsområdet, Earth Sciences, Department of Earth Sciences. Uppsala University, Teknisk-naturvetenskapliga vetenskapsområdet, Earth Sciences, Department of Earth Sciences. Uppsala University, Teknisk-naturvetenskapliga vetenskapsområdet, Earth Sciences, Department of Earth Sciences, Environment and Landscape Dynamics. ELD.
    Buchroithner, Manfred
    Department of Cartography.
    Proceedings of the 5th International Symposium of the use of Remote Sensing in Mountain Cartography: High-Mountain Remote Sensing Cartography 1998 (2002). Conference proceedings (editor) (Refereed)
  • 238.
    Bazzana, Barbara
    et al.
    Department of Computer, Control, and Management Engineering “Antonio Ruberti”, Sapienza University of Rome, Rome, Italy .
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Grisetti, Giorgio
    Department of Computer, Control, and Management Engineering “Antonio Ruberti”, Sapienza University of Rome, Rome, Italy .
    How-to Augmented Lagrangian on Factor Graphs (2024). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 9, no 3, p. 2806-2813. Article in journal (Refereed)
    Abstract [en]

    Factor graphs are a very powerful graphical representation, used to model many problems in robotics. They are widely spread in the areas of Simultaneous Localization and Mapping (SLAM), computer vision, and localization. However, the physics of many real-world problems is better modeled through constraints, e.g., estimation in the presence of inconsistent measurements, or optimal control. Constraints handling is hard because the solution cannot be found by following the gradient descent direction as done by traditional factor graph solvers. The core idea of our method is to encapsulate the Augmented Lagrangian (AL) method in factors that can be integrated straightforwardly in existing factor graph solvers. Besides being a tool to unify different robotics areas, the modularity of factor graphs allows to easily combine multiple objectives and effectively exploiting the problem structure for efficiency. We show the generality of our approach by addressing three applications, arising from different areas: pose estimation, rotation synchronization and Model Predictive Control (MPC) of a pseudo-omnidirectional platform. We implemented our approach using C++ and ROS. Application results show that we can favorably compare against domain specific approaches.
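
    For background, the classical Augmented Lagrangian iteration that the paper encapsulates in factors can be written, in standard textbook form rather than the paper's factor-graph notation, for an equality-constrained problem min_x f(x) subject to c(x) = 0 as:

        \mathcal{L}_{\mu}(x, \lambda) \;=\; f(x) + \lambda^{\top} c(x) + \tfrac{\mu}{2}\,\lVert c(x) \rVert^{2},
        \qquad x^{k+1} = \arg\min_{x} \mathcal{L}_{\mu}\bigl(x, \lambda^{k}\bigr),
        \qquad \lambda^{k+1} = \lambda^{k} + \mu\, c\bigl(x^{k+1}\bigr).

    The inner minimization is a nonlinear least-squares-style problem of the kind existing factor-graph solvers already handle, which is presumably what makes the encapsulation into factors natural.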

  • 239.
    Becher, Marina
    et al.
    Umeå University, Faculty of Science and Technology, Department of Ecology and Environmental Sciences.
    Börlin, Niclas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Klaminder, Jonatan
    Umeå University, Faculty of Science and Technology, Department of Ecology and Environmental Sciences.
    Measuring soil motion with terrestrial close range photogrammetry in periglacial environments (2014). In: EUCOP 4: Book of Abstracts / [ed] Gonçalo Vieira, Pedro Pina, Carla Mora and António Correia, University of Lisbon and the University of Évora, 2014, p. 351. Conference paper (Other academic)
    Abstract [en]

    Cryoturbation plays an important role in the carbon cycle as it redistributes carbon deeper down in the soil where the cold temperature prevents microbial decomposition. This contribution is also included in recent models describing the long-term build-up of carbon stocks in arctic soils. Soil motion rate in cryoturbated soils is sparsely studied. This is because the internal factors maintaining cryoturbation will be affected by any excavation, making it impossible to remove soil samples or install pegs without changing the structure of the soil. So far, mainly the motion of soil surface markers on patterned ground has been used to infer lateral soil motion rates. However, such methods constrain the investigated area to a predetermined distribution of surface markers that may result in a loss of information regarding soil motion in other parts of the patterned ground surface.

    We present a novel method based on terrestrial close range (<5m) photogrammetry to calculate lateral and vertical soil motion across entire small-scale periglacial features, such as non-sorted circles (frost boils). Images were acquired by a 5-camera calibrated rig from at least 8 directions around a non-sorted circle. During acquisition, the rig was carried by one person in a backpack-like portable camera support system. Natural feature points were detected by SIFT and matched between images using the known epipolar geometry of the calibrated rig. The 3D coordinates of points matched between at least 3 images were calculated to create a point cloud of the surface of interest. The procedure was repeated during two consecutive years to be able to measure any net displacement of soil and calculate rates of soil motion. The technique was also applied to a peat palsa where multiple exposures where acquired of selected areas.

    The method has the potential to quantify areas of disturbance and estimate lateral and vertical soil motion in non-sorted circles. Furthermore, it should be possible to quantify peat erosion and rates of desiccation crack formations in peat palsas. This tool could provide new information about cryoturbation rates that could improve existing soil carbon models and increase our understanding about how soil carbon stocks will respond to climate change.
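
    A minimal OpenCV sketch of the feature detection and matching step described above (SIFT keypoints matched between two views); the epipolar filtering with the calibrated rig geometry and the multi-view triangulation of the actual pipeline are only indicated in comments, and all file names are assumptions:

        import cv2

        # Illustrative file names; in the described setup these would be two views from the rig
        img1 = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Ratio-test matching (Lowe's criterion); the abstract additionally restricts matches
        # using the known epipolar geometry of the calibrated rig.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

        pts1 = [kp1[m.queryIdx].pt for m in good]
        pts2 = [kp2[m.trainIdx].pt for m in good]
        # pts1/pts2 would then be triangulated with the rig calibration to build the point cloud.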

  • 240.
    Bechlioulis, Charalampos P.
    et al.
    Natl Tech Univ Athens, Sch Mech Engn, Control Syst Lab, Zografos 15780, Greece..
    Heshmati-alamdari, Shahab
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Karras, George C.
    Natl Tech Univ Athens, Sch Mech Engn, Control Syst Lab, Zografos 15780, Greece..
    Kyriakopoulos, Kostas J.
    Natl Tech Univ Athens, Sch Mech Engn, Control Syst Lab, Zografos 15780, Greece..
    Robust Image-Based Visual Servoing With Prescribed Performance Under Field of View Constraints (2019). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 35, no 4, p. 1063-1070. Article in journal (Refereed)
    Abstract [en]

    In this paper, we propose a visual servoing scheme that imposes predefined performance specifications on the image feature coordinate errors and satisfies the visibility constraints that inherently arise owing to the camera's limited field of view, despite the inevitable calibration and depth measurement errors. Its efficiency is demonstrated via comparative experimental and simulation studies.
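
    The "prescribed performance" referred to above is commonly formalized, in the prescribed performance control literature this line of work builds on (the paper's exact functions may differ), by keeping each image-feature coordinate error e_i(t) inside an exponentially shrinking funnel:

        -\rho_i(t) \;<\; e_i(t) \;<\; \rho_i(t),
        \qquad \rho_i(t) = (\rho_{i,0} - \rho_{i,\infty})\, e^{-l_i t} + \rho_{i,\infty},

    where rho_{i,0} bounds the initial error, rho_{i,infinity} sets the allowed steady-state error, and l_i lower-bounds the convergence rate; the field-of-view constraint can then be encoded by choosing the bounds so that the feature trajectories never leave the image.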

  • 241.
    Behrens, Jan Kristof
    et al.
    Robert Bosch GmbH, Corporate Research, Renningen, Germany.
    Lange, Ralph
    Robert Bosch GmbH, Corporate Research, Renningen, Germany.
    Mansouri, Masoumeh
    Örebro University, School of Science and Technology.
    A Constraint Programming Approach to Simultaneous Task Allocation and Motion Scheduling for Industrial Dual-Arm Manipulation Tasks (2019). In: 2019 International Conference on Robotics and Automation (ICRA) / [ed] Howard, A; Althoefer, K; Arai, F; Arrichiello, F; Caputo, B; Castellanos, J; Hauser, K; Isler, V; Kim, J; Liu, H; Oh, P; Santos, V; Scaramuzza, D; Ude, A; Voyles, R; Yamane, K; Okamura, A, IEEE, 2019, p. 8705-8711. Conference paper (Refereed)
    Abstract [en]

    Modern lightweight dual-arm robots bring the physical capabilities to quickly take over tasks at typical industrial workplaces designed for workers. Low setup times - including the instructing/specifying of new tasks - are crucial to stay competitive. We propose a constraint programming approach to simultaneous task allocation and motion scheduling for such industrial manipulation and assembly tasks. Our approach covers the robot as well as connected machines. The key concept are Ordered Visiting Constraints, a descriptive and extensible model to specify such tasks with their spatiotemporal requirements and combinatorial or ordering constraints. Our solver integrates such task models and robot motion models into constraint optimization problems and solves them efficiently using various heuristics to produce makespan-optimized robot programs. For large manipulation tasks with 200 objects, our solver implemented using Google's Operations Research tools requires less than a minute to compute usable plans. The proposed task model is robot-independent and can easily be deployed to other robotic platforms. This portability is validated through several simulation-based experiments.
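
    A toy Google OR-Tools CP-SAT sketch of the kind of constraint model described above: tasks allocated to one of two arms, non-overlapping execution per arm, and a makespan objective. Durations and variable names are illustrative assumptions, and the model is far simpler than the paper's Ordered Visiting Constraints:

        from ortools.sat.python import cp_model

        durations = [4, 3, 6, 2, 5]                       # illustrative task durations
        horizon = sum(durations)
        model = cp_model.CpModel()

        arm_intervals = {0: [], 1: []}
        ends = []
        for t, d in enumerate(durations):
            start = model.NewIntVar(0, horizon, f"start_{t}")
            end = model.NewIntVar(0, horizon, f"end_{t}")
            ends.append(end)
            assigned = []
            for arm in (0, 1):
                use = model.NewBoolVar(f"task{t}_arm{arm}")
                arm_intervals[arm].append(
                    model.NewOptionalIntervalVar(start, d, end, use, f"iv_{t}_{arm}"))
                assigned.append(use)
            model.Add(sum(assigned) == 1)                 # each task runs on exactly one arm

        for arm in (0, 1):
            model.AddNoOverlap(arm_intervals[arm])        # an arm executes one task at a time

        makespan = model.NewIntVar(0, horizon, "makespan")
        model.AddMaxEquality(makespan, ends)
        model.Minimize(makespan)

        solver = cp_model.CpSolver()
        if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
            print("makespan:", solver.Value(makespan))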

  • 242.
    Bekiroglu, Yasemin
    et al.
    School of Mechanical Engineering, University of Birmingham, Birmingham, UK.
    Damianou, Andreas
    Department of Computer Science, University of Sheffield, Sheffield, UK.
    Detry, Renaud
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Stork, Johannes Andreas
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Kragic, Danica
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Ek, Carl Henrik
    Centre for Autonomous Systems, CSC, Royal Institute of Technology, Sweden.
    Probabilistic consolidation of grasp experience (2016). In: 2016 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2016, p. 193-200. Conference paper (Refereed)
    Abstract [en]

    We present a probabilistic model for joint representation of several sensory modalities and action parameters in a robotic grasping scenario. Our non-linear probabilistic latent variable model encodes relationships between grasp-related parameters, learns the importance of features, and expresses confidence in estimates. The model learns associations between stable and unstable grasps that it experiences during an exploration phase. We demonstrate the applicability of the model for estimating grasp stability, correcting grasps, identifying objects based on tactile imprints and predicting tactile imprints from object-relative gripper poses. We performed experiments on a real platform with both known and novel objects, i.e., objects the robot trained with, and previously unseen objects. Grasp correction had a 75% success rate on known objects, and 73% on new objects. We compared our model to a traditional regression model that succeeded in correcting grasps in only 38% of cases.

  • 243.
    Bekiroglu, Yasemin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Detry, Renaud
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Grasp Stability from Vision and Touch (2012). Conference paper (Refereed)
  • 244.
    Belyaev, Evgeny
    et al.
    ITMO Univ, Russia.
    Codreanu, Marian
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, Faculty of Science & Engineering.
    Juntti, Markku
    Oulu Univ, Finland.
    Egiazarian, Karen
    Tampere Univ, Finland.
    Compressive sensed video recovery via iterative thresholding with random transforms (2020). In: IET Image Processing, ISSN 1751-9659, E-ISSN 1751-9667, Vol. 14, no 6, p. 1187-1200. Article in journal (Refereed)
    Abstract [en]

    The authors consider the problem of compressive sensed video recovery via iterative thresholding algorithm. Traditionally, it is assumed that some fixed sparsifying transform is applied at each iteration of the algorithm. In order to improve the recovery performance, at each iteration the thresholding could be applied for different transforms in order to obtain several estimates for each pixel. Then the resulting pixel value is computed based on obtained estimates using simple averaging. However, calculation of the estimates leads to significant increase in reconstruction complexity. Therefore, the authors propose a heuristic approach, where at each iteration only one transform is randomly selected from some set of transforms. First, they present simple examples, when block-based 2D discrete cosine transform is used as the sparsifying transform, and show that the random selection of the block size at each iteration significantly outperforms the case when fixed block size is used. Second, building on these simple examples, they apply the proposed approach when video block-matching and 3D filtering (VBM3D) is used for the thresholding and show that the random transform selection within VBM3D allows to improve the recovery performance as compared with the recovery based on VBM3D with fixed transform.
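
    A toy numpy/scipy sketch of the core heuristic for a single image: iterative thresholding recovery from compressive measurements where the sparsifying block DCT is applied with a randomly chosen block size at each iteration. The measurement model, step size and threshold are illustrative assumptions; the paper's video experiments use VBM3D rather than a plain block DCT:

        import numpy as np
        from scipy.fft import dctn, idctn

        rng = np.random.default_rng(0)
        x_true = np.zeros((32, 32)); x_true[8:24, 8:24] = 1.0        # simple test image
        n = x_true.size
        A = rng.standard_normal((n // 4, n)) / np.sqrt(n)            # 25% compressive measurements
        y = A @ x_true.ravel()

        def block_dct_threshold(img, block, thr):
            """Hard-threshold block-DCT coefficients using the given block size."""
            out = np.zeros_like(img)
            for i in range(0, img.shape[0], block):
                for j in range(0, img.shape[1], block):
                    c = dctn(img[i:i + block, j:j + block], norm="ortho")
                    c[np.abs(c) < thr] = 0.0
                    out[i:i + block, j:j + block] = idctn(c, norm="ortho")
            return out

        x = np.zeros((32, 32))
        for it in range(200):
            x = x + 0.5 * (A.T @ (y - A @ x.ravel())).reshape(32, 32)   # gradient step on ||y - Ax||^2
            block = rng.choice([4, 8, 16])                              # random transform per iteration
            x = block_dct_threshold(x, block, thr=0.05)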

  • 245.
    Benderius, Björn
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Laser Triangulation Using Spacetime Analysis (2007). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis, spacetime analysis is applied to laser triangulation in an attempt to eliminate certain artifacts caused mainly by reflectance variations of the surface being measured. It is shown that spacetime analysis does eliminate these artifacts almost completely. It is also shown that, thanks to the spacetime analysis, the shape of the laser beam used is no longer critical, and that in some cases the laser could probably even be exchanged for a non-coherent light source. Furthermore, experiments of running the derived algorithm on a GPU (Graphics Processing Unit) are conducted with very promising results.

    The thesis starts by deriving the theory needed for doing spacetime analysis in a laser triangulation setup, taking perspective distortions into account; then several experiments evaluating the method are conducted.

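    A minimal numpy sketch of the temporal side of spacetime analysis: for each sensor pixel, the time at which the sweeping laser peaks is located with sub-frame precision using a parabolic fit around the temporal maximum. The perspective handling and the mapping back to 3D coordinates treated in the thesis are omitted, and all names are illustrative:

        import numpy as np

        def temporal_peak_times(stack):
            """stack: (T, H, W) intensity volume from a laser sweep.
            Returns the per-pixel peak time with sub-frame (parabolic) refinement."""
            T = stack.shape[0]
            t0 = np.argmax(stack, axis=0)                      # integer frame of maximum intensity
            t0 = np.clip(t0, 1, T - 2)                         # keep a neighbour on each side
            rows, cols = np.indices(t0.shape)
            y0, y1, y2 = stack[t0 - 1, rows, cols], stack[t0, rows, cols], stack[t0 + 1, rows, cols]
            denom = y0 - 2 * y1 + y2
            offset = np.where(np.abs(denom) > 1e-9, 0.5 * (y0 - y2) / denom, 0.0)
            return t0 + offset                                 # fractional frame index per pixel

        # Example: synthetic Gaussian pulse passing each pixel at a different time
        T, H, W = 50, 4, 4
        t = np.arange(T)[:, None, None]
        true_t = np.random.uniform(10, 40, size=(H, W))
        stack = np.exp(-0.5 * ((t - true_t) / 2.0) ** 2)
        print(np.max(np.abs(temporal_peak_times(stack) - true_t)))   # small residual
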
  • 246.
    Bengtsson, Björn
    University of Skövde, School of Informatics.
    Dynamisk Kollisionsundvikande I Twin Stick shooter: Hastighetshinder och partikelseparation [Dynamic collision avoidance in twin-stick shooters: velocity obstacles and particle separation] (2019). Independent thesis, Basic level (university diploma), 20 credits / 30 HE credits. Student thesis
    Abstract [sv, translated to English]

    This thesis compares collision avoidance and time efficiency between the two methods velocity obstacles ("hastighetshinder") and particle separation ("partikelseparation") in the twin-stick shooter game genre. The work tries to answer the question: how do collision avoidance and time efficiency differ between the velocity obstacle and particle separation methods in the twin-stick shooter genre with flocking behaviour? To answer the question, an artefact was created in which agents chase a player while avoiding collisions with other agents; the agents do, however, aim to collide with the player. Different experiments are run in the artefact based on configured parameters. Each experiment runs for a fixed time, and all data on collisions and execution time for each method is saved to a text file. The results of the experiments indicate that particle separation is better suited for twin-stick shooters: velocity obstacles collide less, but the computation time is too high and scales poorly with the number of agents, which does not fit twin-stick shooters since there are usually many agents on screen. The collision avoidance methods also have applications for radio-controlled cars and robots, as well as crowd simulation.

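    A minimal numpy sketch of the particle-separation (flocking separation) steering compared in the thesis: each agent is pushed away from neighbours closer than a separation radius. The radius, weights and update scheme are illustrative assumptions, not those of the thesis artefact:

        import numpy as np

        def separation_step(positions, radius=1.5, strength=0.05):
            """One particle-separation update: push each agent away from close neighbours."""
            diff = positions[:, None, :] - positions[None, :, :]       # pairwise offset vectors
            dist = np.linalg.norm(diff, axis=-1)
            np.fill_diagonal(dist, np.inf)                             # ignore self
            close = dist < radius
            # weight each offset by 1/distance so nearer neighbours push harder
            push = np.where(close[..., None], diff / (dist[..., None] ** 2 + 1e-9), 0.0)
            return positions + strength * push.sum(axis=1)

        agents = np.random.uniform(0, 3, size=(20, 2))                 # 20 agents in a small area
        for _ in range(100):
            agents = separation_step(agents)
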
  • 247.
    Bengtsson, E.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    The technical development in the ICT-field (2000). In: IT at school between vision and practice - a research overview, 2000, p. 39-55. Chapter in book (Other scientific)
  • 248.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Analysis of 3D images of molecules, cells, tissues and organs (2007). In: Medicinteknikdagarna 2007, 2007, p. 1. Conference paper (Other scientific)
    Abstract [en]

    Our world is three dimensional. With our eyes we mainly see the surfaces of 3D objects and in conventional imaging we see projections of parts of the 3D world down to 2D. But over the last decades new imaging techniques such as tomography and confocal microscopy have evolved that make true 3D volume images available. These images can reveal information about the inner properties and conditions of objects, e.g. our bodies, that can be of immense value to science and medicine. But to really explore the information in these images we need computer support.

    At the Centre for Image Analysis in Uppsala we are developing methods for the analysis and visualisation of volume images. A nice aspect of image processing methods is that they in most cases are independent of the scale in the images. In this presentation we will give examples of how images of widely different scales can be analysed and visualised.

    - At the highest resolution we have images of protein molecules created by cryo-electron tomography with voxels of a few nanometers.

    - Using confocal microscopy we can also image single molecules, but then only seeing them as bright spots that need to be localized at micrometer scales in the cells.

    - The cells build up tissue and using conventional pathology stains or micro CT we can image the tissue in 2D and 3D. We are using such images to develop methods for studying tissue integration of implants.

    - Finally conventional X-ray tomography and magnetic resonance tomography provide images on the organ level with voxels in the millimetre range. We are developing methods for liver segmentation in CT data and visualising the contrast uptake over time in MR angiography images of breasts.

  • 249.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Computerized Cell Image Analysis: Past, Present and Future (2003). Conference paper (Refereed)
  • 250.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Computerized Cell Image Processing in Healthcare (2005). In: Proceedings of Healthcomm2005, 2005, p. 11-17. Conference paper (Refereed)
    Abstract [en]

    The visual interpretation of images is at the core of most medical diagnostic procedures and the final decision for many diseases, including cancer, is based on microscopic examination of cells and tissues. Through screening of cell samples the incidence and mortality of cervical cancer have been reduced significantly. The visual interpretation is, however, tedious and in many cases error-prone. Therefore many attempts have been made at using the computer to supplement or replace the human visual inspection by computer analysis and to automate some of the more tedious visual screening tasks. The computers and computer networks have also been used to manage, store, transmit and display images of cells and tissues making it possible to visually analyze cells from remote locations. In this presentation these developments are traced from their very beginning through the present situation and into the future.
