The prospect of artificial consciousness raises theoretical, technical, and ethical challenges that converge on the core question of how such consciousness could eventually be identified and characterized. To address this question, I propose starting from a theoretical reflection on the meaning and main characteristics of consciousness. On the basis of this conceptual clarification, it becomes possible to identify relevant empirical indicators (i.e., features that facilitate the attribution of consciousness to the system under consideration) and to articulate the key ethical implications that arise. In this chapter, I build on previous work on the topic, presenting a list of candidate indicators of consciousness in artificial systems and introducing an ethical reflection on their potential implications. Specifically, I focus on two main ethical issues: the conditions under which an artificial system may be considered a moral subject, and the need for a non-anthropocentric approach to the science and ethics of artificial consciousness.