# FIle-Compression

This project implements text file compression using the Huffman algorithm. Huffman coding is a lossless data compression technique and forms the basic idea behind file compression: the most frequent symbols are encoded with the fewest bits, so data is compressed according to its frequency of occurrence without losing any information. This README touches on fixed-length and variable-length encoding, uniquely decodable codes, prefix rules, and the construction of the Huffman tree.

Text compression is widely used in the real world, for example whenever files need to be sent or transferred. The main idea is a binary tree, called the Huffman tree, which is used to generate the bit strings. For every symbol in the text, Huffman coding produces a unique sequence of bits: each edge of the tree is labelled 0 or 1, and each leaf node represents a byte (symbol). A priority queue ordered by frequency is used to repeatedly merge the least frequent nodes while the tree is built.

File compression in general is used to reduce the size of one or more files. When a file or a group of files is compressed, the resulting "archive" often takes up 50% to 90% less disk space than the original file(s). Common compression formats include Zip, RAR, and 7z, each of which uses its own algorithm. Compressing files saves disk space on both removable and non-removable drives: the compression process reduces the overall size of a file by removing data that is repeated or empty and replacing it with a flag, and the flag takes up less space than the data it stands for. That is why we are building this software: to make compression easy and to help with memory issues. The repository contains complete code for Huffman encoding and decoding.
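To illustrate the steps described above, here is a minimal Python sketch, not the repository's actual implementation: it builds the frequency-ordered priority queue, constructs the Huffman tree, derives a bit string for each symbol, and decodes the result back. All class and function names in it are assumptions made for this example.

```python
import heapq
from collections import Counter


class Node:
    """A node of the Huffman tree; leaves carry a symbol, internal nodes do not."""

    def __init__(self, freq, symbol=None, left=None, right=None):
        self.freq = freq
        self.symbol = symbol
        self.left = left
        self.right = right

    def __lt__(self, other):
        # heapq orders nodes by frequency (lowest frequency first)
        return self.freq < other.freq


def build_tree(text):
    # Priority queue (min-heap) of leaf nodes, ordered by symbol frequency
    heap = [Node(freq, sym) for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least frequent nodes under a new parent
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        heapq.heappush(heap, Node(lo.freq + hi.freq, left=lo, right=hi))
    return heap[0]


def build_codes(node, prefix="", codes=None):
    # Walk the tree: a left edge appends '0', a right edge appends '1'
    if codes is None:
        codes = {}
    if node.symbol is not None:
        codes[node.symbol] = prefix or "0"  # handle a one-symbol input
        return codes
    build_codes(node.left, prefix + "0", codes)
    build_codes(node.right, prefix + "1", codes)
    return codes


def decode(root, bits):
    # Follow edges for each bit; emit a symbol whenever a leaf is reached
    out, node = [], root
    for b in bits:
        node = node.left if b == "0" else node.right
        if node.symbol is not None:
            out.append(node.symbol)
            node = root
    return "".join(out)


if __name__ == "__main__":
    text = "this is an example of huffman encoding"
    root = build_tree(text)
    codes = build_codes(root)
    encoded = "".join(codes[ch] for ch in text)
    assert decode(root, encoded) == text
    print(codes)
    print(f"original bits: {len(text) * 8}, encoded bits: {len(encoded)}")
```

Running the script prints the per-symbol codes and shows that frequent characters (such as the space) receive shorter codes, which is where the size reduction comes from.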