Digitala Vetenskapliga Arkivet

Context-aware human-robot collaboration in assembly
KTH, School of Industrial Engineering and Management (ITM), Production Engineering. ORCID iD: 0000-0001-9618-8826
2020 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This PhD study aims to increase the accuracy and efficiency of human-robot collaborative (HRC) assembly systems. To achieve this goal, four main directions are investigated. The first direction is HRC assembly context recognition, which focuses on identifying and recognising relevant assembly context in the assembly environment. Valuable knowledge can be captured through the assembly context to increase assembly efficiency. A definition of assembly context is given, and recognition algorithms are designed. The second direction is multimodal robot control. Instead of coding, the possibility of controlling robots through multiple modalities is explored, and an algorithm is developed to increase the recognition accuracy of multimodal robot control. The third direction is human motion prediction. With an accurate and timely prediction of the human operator's motion, robots can anticipate and prepare for the operator's next move. Two different approaches are explored to predict human motions during assembly operations, further boosting the efficiency of HRC assembly systems. The last direction of the study is remote HRC, a special scenario in which a human operator collaborates with a robot remotely. The scenario is investigated, and a possible solution is provided. Along the four directions, key algorithms, system designs, and experiments are analysed. Furthermore, the advantages, drawbacks, and future directions of the approaches are given.

Abstract [sv]

The aim of this PhD study is to increase the accuracy and efficiency of human-robot collaborative (HRC) assembly systems. To achieve this goal, four main directions are investigated in this research. The first direction is HRC assembly context recognition, which focuses on the identification and recognition of relevant assembly context in the assembly environment. Valuable information can be captured through the assembly context to increase assembly efficiency. A definition of assembly context is given, and recognition algorithms are designed. The second direction is multimodal robot control. Instead of coding, the possibility of controlling robots with multiple modalities is explored. An algorithm is developed to increase the recognition accuracy of multimodal robot control. The third direction is human motion prediction. Robots can be supported to anticipate and prepare for the human operator's next move with an accurate and timely prediction of the operator's motion. Two different approaches are explored to predict human motions during assembly operations. The efficiency of HRC assembly systems can thereby be improved further. The last direction of the study is remote HRC. A special scenario of HRC is explored in which a human operator collaborates with a robot remotely. The scenario is investigated, and a possible solution is provided. Along the four directions, key algorithms, system designs, and experiments are analysed. Furthermore, the advantages, drawbacks, and future directions of the approaches are given.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2020, p. 81
Series
TRITA-ITM-AVL ; 2020:34
National Category
Production Engineering, Human Work Science and Ergonomics
Research subject
Production Engineering
Identifiers
URN: urn:nbn:se:kth:diva-279471
ISBN: 978-91-7873-594-5 (print)
OAI: oai:DiVA.org:kth-279471
DiVA, id: diva2:1460103
Public defence
2020-09-11, https://kth-se.zoom.us/j/67928511851, Stockholm, 10:00 (English)
Available from: 2020-08-21. Created: 2020-08-21. Last updated: 2022-06-25. Bibliographically approved.
List of papers
1. Gesture recognition for human-robot collaboration: A review
2018 (English). In: International Journal of Industrial Ergonomics, ISSN 0169-8141, E-ISSN 1872-8219, Vol. 68, p. 355-367. Article, review/survey (Refereed). Published.
Abstract [en]

Recently, the concept of human-robot collaboration has attracted considerable research interest. Instead of robots replacing human workers in workplaces, human-robot collaboration allows humans and robots to work together in a shared manufacturing environment. With assistive robots, human-robot collaboration can relieve human workers of heavy tasks, provided that effective communication channels between humans and robots are established. Although such channels are still limited, gesture recognition has long been applied effectively as an interface between humans and computers. Covering some of the most important technologies and algorithms of gesture recognition, this paper provides an overview of gesture recognition research and explores the possibility of applying gesture recognition in human-robot collaborative manufacturing. An overall model of gesture recognition for human-robot collaboration is also proposed, comprising four essential technical components: sensor technologies, gesture identification, gesture tracking, and gesture classification. Reviewed approaches are classified according to these four components, and a statistical analysis follows the technical analysis. Towards the end of the paper, future research trends are outlined.
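The four components of the model can be pictured as a simple processing chain. The sketch below is purely illustrative and is not taken from the review: the hand locator, the smoothing tracker, and the rule-based classifier are hypothetical stand-ins for the sensor, identification, tracking, and classification stages.

```python
# Hypothetical sketch of the four-component pipeline: sensing (frames in),
# gesture identification (locate the hand), gesture tracking (smooth the
# position), gesture classification (label the trajectory).

def identify_hand(frame):
    # Identification stage: a frame is modelled as a dict with a "hand"
    # position already extracted, standing in for a real detector.
    return frame.get("hand")

def track(prev_pos, pos, alpha=0.5):
    # Tracking stage: exponential smoothing of the hand position.
    if prev_pos is None:
        return pos
    return tuple(alpha * p + (1 - alpha) * q for p, q in zip(pos, prev_pos))

def classify(trajectory):
    # Classification stage: a toy rule on net horizontal displacement.
    dx = trajectory[-1][0] - trajectory[0][0]
    if dx > 0.2:
        return "swipe_right"
    if dx < -0.2:
        return "swipe_left"
    return "hold"

def recognise(frames):
    # Run the chain over a stream of sensor frames.
    trajectory, prev = [], None
    for frame in frames:
        pos = identify_hand(frame)
        if pos is None:
            continue
        prev = track(prev, pos)
        trajectory.append(prev)
    return classify(trajectory) if trajectory else None
```

In a real system each stage would be replaced by one of the reviewed techniques (e.g. a depth sensor, a skeleton tracker, a learned classifier); the value of the model is that the stages compose through narrow interfaces like these.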

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Human-robot collaboration, Gesture, Gesture recognition
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:kth:diva-270824 (URN)
10.1016/j.ergon.2017.02.004 (DOI)
000452343300038 ()
2-s2.0-85014161933 (Scopus ID)
Note

QC 20200313

Available from: 2020-03-13. Created: 2020-03-13. Last updated: 2022-06-26. Bibliographically approved.
2. Human motion prediction for human-robot collaboration
2017 (English). In: Journal of Manufacturing Systems, ISSN 0278-6125, E-ISSN 1878-6642, Vol. 44, p. 287-294. Article in journal (Refereed). Published.
Abstract [en]

In human-robot collaborative manufacturing, industrial robots work alongside human workers, jointly performing the assigned tasks seamlessly. A human-robot collaborative manufacturing system is more customised and flexible than conventional manufacturing systems. In the area of assembly, a practical human-robot collaborative assembly system should be able to predict a human worker's intention and assist the human during assembly operations. In response to this requirement, this research proposes a new human-robot collaborative system design. The primary focus of the paper is to model product assembly tasks as sequences of human motions. Existing human motion recognition techniques are applied to recognise the motions, and a hidden Markov model is used on the motion sequence to generate a motion transition probability matrix. Based on this matrix, human motion prediction becomes possible. The predicted human motions are evaluated and applied in task-level human-robot collaborative assembly.
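The core idea of a motion transition probability matrix can be sketched in a few lines. This is a minimal first-order Markov-chain illustration, not the paper's implementation; the motion labels ("reach", "grasp", "place", "screw") are hypothetical examples.

```python
from collections import defaultdict

def transition_matrix(sequences):
    # Count observed transitions between consecutive motions.
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    # Normalise each row of counts into transition probabilities.
    matrix = {}
    for a, row in counts.items():
        total = sum(row.values())
        matrix[a] = {b: n / total for b, n in row.items()}
    return matrix

def predict_next(matrix, current):
    # Predict the most probable next motion given the current one.
    row = matrix.get(current)
    return max(row, key=row.get) if row else None
```

Given several recorded assembly runs, the matrix captures which motion typically follows which, so the robot can prepare for the predicted next motion (e.g. fetch the part most likely needed next).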

Place, publisher, year, edition, pages
Elsevier, 2017
Keywords
Human-robot collaboration, Human motion prediction, Assembly
National Category
Materials Engineering
Identifiers
urn:nbn:se:kth:diva-215844 (URN)
10.1016/j.jmsy.2017.04.009 (DOI)
000411772300004 ()
2-s2.0-85018894393 (Scopus ID)
Conference
45th SME North American Manufacturing Research Conference (NAMRC), June 4-8, 2017, University of Southern California, Los Angeles, CA
Note

QC 20171017

Available from: 2017-10-17. Created: 2017-10-17. Last updated: 2024-03-18. Bibliographically approved.
3. Deep learning-based human motion recognition for predictive context-aware human-robot collaboration
2018 (English). In: CIRP Annals, ISSN 0007-8506, E-ISSN 1726-0604, Vol. 67, no 1, p. 17-20. Article in journal (Refereed). Published.
Abstract [en]

Timely context awareness is key to improving operation efficiency and safety in human-robot collaboration (HRC) for intelligent manufacturing. Visual observation of human workers' motion provides informative clues about the specific tasks to be performed and can thus be exploited to establish accurate and reliable context awareness. Towards this goal, this paper investigates deep learning as a data-driven technique for continuous human motion analysis and the prediction of future HRC needs, leading to improved robot planning and control in accomplishing a shared task. A case study in engine assembly is carried out to validate the feasibility of the proposed method.
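One practical ingredient of continuous motion analysis is temporal smoothing of per-frame predictions, so the robot acts on a stable context estimate rather than frame-level noise. The sketch below is an assumption-labelled illustration of that idea only: the per-frame labels would come from the paper's deep network, which is not reproduced here.

```python
from collections import Counter, deque

def smooth_labels(frame_labels, window=3):
    # Majority vote over a sliding window of per-frame motion labels.
    # Stand-in for the temporal aggregation a continuous recogniser needs;
    # the labels themselves are assumed to come from an upstream classifier.
    recent, out = deque(maxlen=window), []
    for label in frame_labels:
        recent.append(label)
        out.append(Counter(recent).most_common(1)[0][0])
    return out
```

A larger window gives a more stable but more delayed estimate; the trade-off between stability and timeliness mirrors the "timely context awareness" requirement stated in the abstract.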

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Motion, Predictive model, Machine learning
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:kth:diva-232639 (URN)
10.1016/j.cirp.2018.04.066 (DOI)
000438470400005 ()
2-s2.0-85046730213 (Scopus ID)
Note

QC 20180801

Available from: 2018-08-01. Created: 2018-08-01. Last updated: 2024-03-18. Bibliographically approved.
4. Towards Robust Human-Robot Collaborative Manufacturing: Multimodal Fusion
2018 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 6, p. 74762-74771. Article in journal (Refereed). Published.
Abstract [en]

Intuitive and robust multimodal robot control is key to human-robot collaboration (HRC) in manufacturing systems. Multimodal robot control methods introduced in previous studies allow human operators to control robots intuitively without programming brand-specific code. However, most of these methods are unreliable because feature representations are not shared across the modalities. To address this problem, this paper proposes a deep learning-based multimodal fusion architecture for robust multimodal HRC manufacturing systems. The proposed architecture covers three modalities: speech commands, hand motion, and body motion. Three unimodal models are first trained to extract features, which are then fused for representation sharing. Experiments show that the proposed multimodal fusion model outperforms the three unimodal models, indicating great potential for applying the architecture to robust HRC manufacturing systems.
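The fusion step itself can be illustrated with a toy late-fusion sketch: three unimodal feature vectors are concatenated into a shared representation and then classified. Everything below is an assumption for illustration, not the paper's architecture; the feature vectors and the nearest-centroid classifier stand in for learned networks.

```python
def fuse(speech_feat, hand_feat, body_feat):
    # Representation sharing via concatenation of the three unimodal
    # feature vectors (speech command, hand motion, body motion).
    return speech_feat + hand_feat + body_feat

def nearest_command(fused, centroids):
    # Classify the fused vector by squared distance to per-command
    # centroids — a stand-in for the fusion model's classification head.
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(centroids, key=lambda cmd: dist2(fused, centroids[cmd]))
```

The point of fusing before classification is that a weak cue in one modality (say, noisy speech) can be compensated by the other two, which is why the fused model can outperform each unimodal model.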

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
Deep learning, human-robot collaboration, multimodal fusion, intelligent manufacturing systems
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:kth:diva-241236 (URN)
10.1109/ACCESS.2018.2884793 (DOI)
000454277700001 ()
2-s2.0-85058114368 (Scopus ID)
Note

QC 20190116

Available from: 2019-01-16. Created: 2019-01-16. Last updated: 2022-06-26. Bibliographically approved.
5. Remote human–robot collaboration: A cyber–physical system application for hazard manufacturing environment
2020 (English). In: Journal of Manufacturing Systems, ISSN 0278-6125, E-ISSN 1878-6642, Vol. 54, p. 24-34. Article in journal (Refereed). Published.
Abstract [en]

A collaborative robot's lead-through is a key feature for human-robot collaborative manufacturing, as it frees human operators from debugging complex robot control code. In hazardous manufacturing environments, human operators are not allowed to enter, yet the lead-through feature is still desired in many circumstances. To address this problem, the authors introduce a remote human-robot collaboration system that follows the concept of cyber-physical systems. The introduced system can flexibly work in four different modes according to different scenarios. Using a collaborative robot and an industrial robot, a remote robot control system and a model-driven display system are designed, implemented, and tested in different scenarios. The final analysis indicates great potential for adopting the developed system in hazardous manufacturing environments.
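The record does not name the four operating modes, so the mode names and selection rules below are purely hypothetical placeholders. They only illustrate the structure of scenario-driven mode switching in such a cyber-physical system.

```python
# Hypothetical mode names — the paper's actual four modes are not listed
# in this record, so these labels are illustrative only.
MODES = ("local_lead_through", "remote_lead_through",
         "local_monitoring", "remote_execution")

def select_mode(operator_on_site, hazard_present):
    # Choose an operating mode from the scenario: whether the operator is
    # physically present and whether the environment is hazardous.
    if operator_on_site and not hazard_present:
        return "local_lead_through"
    if operator_on_site:
        return "local_monitoring"
    if hazard_present:
        return "remote_execution"
    return "remote_lead_through"
```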

Place, publisher, year, edition, pages
Elsevier, 2020
Keywords
Cyber–physical systems, Human–robot collaboration, Manufacturing
National Category
Mechanical Engineering
Identifiers
urn:nbn:se:kth:diva-267862 (URN)
10.1016/j.jmsy.2019.11.001 (DOI)
000521511500003 ()
2-s2.0-85075565506 (Scopus ID)
Note

QC 20200219

Available from: 2020-02-19. Created: 2020-02-19. Last updated: 2022-06-26. Bibliographically approved.

Open Access in DiVA

fulltext (3124 kB), 392 downloads
File information
File name: FULLTEXT01.pdf
File size: 3124 kB
Checksum (SHA-512): 7ce93cfdf6b17d3b74c4543cdd67463805c0ccb7482a4688523429995097fae7cd560aa562a93501938785c6b04a55660c131bb63499e26c737cfcaf74e302e9
Type: fulltext
Mimetype: application/pdf

Other links

For physical attendance, or if you lack computer access or computer experience, contact service@itm.kth.se

Search in DiVA

By author/editor
Liu, Hongyi
By organisation
Production Engineering
Production Engineering, Human Work Science and Ergonomics

Total: 392 downloads
The number of downloads is the sum of all downloads of full texts. It may include, for example, previous versions that are no longer available.
