Python Text Processing - Filter Duplicate Words
Often we need to analyse a text only in terms of the unique words it contains, which means the duplicate words must first be removed. This can be achieved by combining the word tokenization available in NLTK with Python's built-in set type.
Example - Without preserving the order
In the example below we first tokenize the sentence into words, then apply the set() function, which creates an unordered collection of unique elements. The result contains each word exactly once, but the original order is lost.
main.py
from nltk.tokenize import word_tokenize

word_data = "The Sky is blue also the ocean is blue also Rainbow has a blue colour."

# First word tokenization
nltk_tokens = word_tokenize(word_data)

# Applying set() to drop duplicates
no_order = list(set(nltk_tokens))
print(no_order)
Output
When we run the above program, we get the following output −
['is', 'Rainbow', 'ocean', 'the', 'has', 'The', 'Sky', '.', 'a', 'also', 'colour', 'blue']
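Note that the order of the elements produced by set() is arbitrary and may differ between runs. If you do not need the sentence order but want reproducible output, one simple option (a sketch using str.split() instead of NLTK's tokenizer, so it needs no extra install) is to sort the unique tokens alphabetically:

```python
# Deduplicate and sort for deterministic output.
# str.split() is used here instead of word_tokenize, so the trailing
# punctuation stays attached to the last word.
word_data = "The Sky is blue also the ocean is blue also Rainbow has a blue colour."
tokens = word_data.split()
unique_sorted = sorted(set(tokens))
print(unique_sorted)
```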
Preserving the Order
To remove the duplicates while preserving the order in which the words appear in the sentence, we iterate over the tokens and append each word to a result list only if it has not been seen before.
main.py
from nltk.tokenize import word_tokenize
word_data = "The Sky is blue also the ocean is blue also Rainbow has a blue colour."
# First Word tokenization
nltk_tokens = word_tokenize(word_data)
ordered_tokens = set()
result = []
for word in nltk_tokens:
    if word not in ordered_tokens:
        ordered_tokens.add(word)
        result.append(word)
print(result)
Output
When we run the above program, we get the following output −
['The', 'Sky', 'is', 'blue', 'also', 'the', 'ocean', 'Rainbow', 'has', 'a', 'colour', '.']
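Since Python 3.7, dictionaries preserve insertion order, so the loop above can be condensed into a one-liner with dict.fromkeys(). The sketch below uses str.split() rather than NLTK's word_tokenize to stay dependency-free, so punctuation remains attached to the final word; with word_tokenize the result would match the output shown above.

```python
# dict.fromkeys() keeps only the first occurrence of each key and
# preserves insertion order (guaranteed since Python 3.7).
word_data = "The Sky is blue also the ocean is blue also Rainbow has a blue colour."
tokens = word_data.split()
unique_ordered = list(dict.fromkeys(tokens))
print(unique_ordered)
# → ['The', 'Sky', 'is', 'blue', 'also', 'the', 'ocean', 'Rainbow', 'has', 'a', 'colour.']
```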