Siri, Cortana, Google Assistant and other applications use the human voice to perform a variety of tasks, such as searching for information, sending emails or making phone calls. Voice-based technologies are also increasingly applied in the legal environment and in legal services, for example in legal advice rendered online and in legal translation. At the same time, new applications of innovative technologies have made it necessary to redefine the approach to privacy. The cases of Edward Snowden and Julian Assange showed how meaningful privacy and its protection are, and made us realise the sheer amount of personal data processed and stored daily. This is why privacy and its protection will soon become one of the most important personal rights, and it is in this context that the issue of voice protection comes to the fore.

The voice is, obviously, a personal right. What is more, the voice is becoming a tool used by most applications, for mundane activities as well as more complex ones: ROSS AI, for instance, operates on IBM's Watson, can conduct legal research, and learns to understand the law with every search it performs. What if applications such as Watson could use the voice of a specific lawyer and, on the basis of a voice sample, produce speech with entirely different content, for example in the form of legal advice? In practice this is already possible: last November Adobe presented Adobe VoCo to the world, a tool which, given a voice sample, is able to read out content differing from the content sampled. The present article attempts to shed some light on the risks that voice cloning technology poses in the legal environment, and analyses whether the law can adequately protect the human voice as a personal right.