The European Commission's Ethical Guidelines for AI are highly relevant to the field because of their thoroughness. Hagendorff (2020) evaluated 22 different ethical guidelines for AI. He found that about 80 per cent of the guidelines treat privacy, fairness and accountability as minimal requirements of a responsible AI system. He also noted that these matters, along with robustness and explainability, are more easily addressed as technical problems than the social issues that may arise from the development of AI systems. He found that company codes of ethics were the most minimalistic, which our paper also confirms. Hagendorff further observed that ethical guidelines usually do not commit to larger societal interests. This paper compares the EC's ethical guidelines with those of IBM, Google and IEEE to form a picture of how ethical issues are approached in a commercial environment. The analysis method applied is data-driven: the ethical guidelines are examined and the common themes that emerge across them are noted. The EC's ethical guidelines serve as the basis of the comparison. The common themes identified across these guidelines are accountability; transparency and explainability; diversity, inclusion and fairness; safety and security; and societal wellbeing and humanity, even though not all themes are discussed in detail in every guideline. It appears that ethical guidelines usually do not commit to larger societal interests because the societal issues, and the wider effects of AI on society, are difficult to express in the form of simple guidelines. The discussion of the effects of artificial intelligence on societies needs to be addressed to political decision-makers and a wider audience of researchers, not only to AI developers or the business organizations that exploit artificial intelligence. There is also a need to involve users and target groups in this discussion.