ASCII and Unicode are two character encoding standards. Broadly, they define how characters are represented in binary so that text can be written, stored, transmitted, and read in digital media. The main differences between the two lie in how they encode characters and in the number of bits they use for each. ASCII originally used seven bits to encode each character. This was later increased to eight with Extended ASCII to address the inadequacy of the original. In contrast, Unicode text can be stored using one of several encoding schemes: UTF-8, UTF-16, and UTF-32, whose code units are 8, 16, and 32 bits wide respectively. Wider code units let you address more characters directly at the expense of larger files, while narrower ones save a lot of space. If you are encoding a large document written mostly in English, a compact encoding such as UTF-8 (or plain ASCII) would probably be best.
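The size trade-off between the encodings can be seen directly; a minimal sketch using Python's built-in `str.encode`:

```python
# Encode the same English text with each Unicode encoding scheme and
# compare the resulting byte counts.
text = "Hello"  # ASCII-range English text

for encoding in ("utf-8", "utf-16", "utf-32"):
    data = text.encode(encoding)
    print(encoding, len(data), "bytes")

# UTF-8 needs 5 bytes (1 per character); UTF-16 needs 12 (2 per
# character plus a 2-byte byte-order mark); UTF-32 needs 24.
```

For ASCII-range text, UTF-8 is four times smaller than UTF-32, which is exactly why it is the usual choice for English documents.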
One of the main reasons Unicode was created was the problem caused by the many non-standard Extended ASCII code pages. Unless you are using the prevalent code page, which is used by Microsoft and most other software companies, you are likely to encounter problems with your characters appearing as boxes. Unicode virtually eliminates this problem because all of its character code points are standardized.
Another major advantage of Unicode is its capacity: it can accommodate more than a million code points. Because of this, Unicode currently covers most written languages and still has room for more. This includes typical left-to-right scripts like English and even right-to-left scripts like Arabic. Chinese, Japanese, and many other scripts are also represented within Unicode. So Unicode won’t be replaced anytime soon.
In order to maintain compatibility with ASCII, which was already in widespread use at the time, Unicode was designed so that its first 128 code points match ASCII exactly, and UTF-8 encodes them as the same single bytes. So if you open an ASCII-encoded file as UTF-8, you still get the correct characters encoded in the file. This facilitated the adoption of Unicode, as it lessened the impact of adopting a new encoding standard for those who were already using ASCII.
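This backward compatibility is easy to demonstrate; a short sketch in Python:

```python
# An ASCII-encoded string and its UTF-8 encoding are byte-for-byte
# identical, so every valid ASCII file is also a valid UTF-8 file.
text = "Hello, world!"
ascii_bytes = text.encode("ascii")
utf8_bytes = text.encode("utf-8")
print(ascii_bytes == utf8_bytes)  # True

# Decoding the ASCII bytes as UTF-8 gives the original text back.
print(ascii_bytes.decode("utf-8"))
```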
1. ASCII uses a 7-bit encoding (8-bit in Extended ASCII) while Unicode uses variable-width encodings.
2. Unicode code points are standardized while Extended ASCII code pages aren’t.
3. Unicode represents most written languages in the world while ASCII does not.
4. ASCII has its equivalent within Unicode.
Unicode is a superset of ASCII. ASCII has only 128 characters (256 in its extended forms) with very few symbols, but Unicode gives you the freedom to write a far wider range of characters, covering not only the English alphabet but most other languages in the world. Unicode defines over a million code points; its Basic Multilingual Plane alone holds 65,536. Because Unicode characters don't generally fit into one 8-bit byte, there are several ways of storing Unicode characters as byte sequences, such as UTF-32, UTF-16, and UTF-8. UTF-8 has an advantage where ASCII characters are most prevalent: in that case most characters occupy only one byte each. It is also advantageous that a UTF-8 file containing only ASCII characters has the same encoding as an ASCII file. UTF-16 is better where ASCII is not predominant; it primarily uses 2 bytes per character. UTF-8 starts to use 3 or more bytes for higher-order characters, where UTF-16 remains at just 2 most of the time.
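The per-character costs described above can be checked directly; a small sketch comparing UTF-8 and UTF-16 byte counts for characters from different scripts (using `utf-16-le` to leave out the byte-order mark):

```python
# Byte counts per character in UTF-8 vs UTF-16 for three scripts.
samples = [
    ("A", "ASCII letter"),
    ("é", "Latin-1 letter"),
    ("中", "CJK character"),
]

for ch, label in samples:
    u8 = len(ch.encode("utf-8"))
    u16 = len(ch.encode("utf-16-le"))  # -le variant: no byte-order mark
    print(f"{label}: UTF-8 {u8} byte(s), UTF-16 {u16} byte(s)")

# "A" costs 1 byte in UTF-8 vs 2 in UTF-16; "中" costs 3 vs 2.
# So UTF-8 wins for ASCII-heavy text, UTF-16 for CJK-heavy text.
```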