Text Classification and Clustering: Key Points (Python)

Published: 2025-02-11

Text classification and clustering are two important tasks in natural language processing (NLP). With these techniques we can automatically assign text data to categories or group similar documents into clusters. This article works through 14 case studies showing how to perform text classification and clustering with Python.

1. Text Preprocessing

Before any text analysis, the text has to be preprocessed. Typical preprocessing steps include removing punctuation, stop words, and digits, as well as stemming and lemmatization.

import re
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

# Sample text
text = "Hello, this is an example sentence! It contains punctuation, numbers (123), and stop words."

# Remove punctuation
text = re.sub(f'[{re.escape(string.punctuation)}]', '', text)

# Convert to lowercase
text = text.lower()

# Remove digits
text = re.sub(r'\d+', '', text)

# Remove stop words (requires nltk.download('stopwords'))
stop_words = set(stopwords.words('english'))
words = text.split()
filtered_words = [word for word in words if word not in stop_words]

# Stemming
stemmer = PorterStemmer()
stemmed_words = [stemmer.stem(word) for word in filtered_words]

print("Preprocessed text:", ' '.join(stemmed_words))

Output:

Preprocessed text: hello exampl sentenc contain punctuat number stop word
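
The step list above also mentions lemmatization, which the snippet does not show. As a minimal sketch (assuming NLTK's WordNet data has been downloaded), it can be applied to the same filtered_words list:

from nltk.stem import WordNetLemmatizer

# Minimal lemmatization sketch; requires nltk.download('wordnet')
lemmatizer = WordNetLemmatizer()
lemmatized_words = [lemmatizer.lemmatize(word) for word in filtered_words]
print("Lemmatized text:", ' '.join(lemmatized_words))

Unlike stemming, lemmatization returns dictionary forms rather than truncated stems; note that WordNetLemmatizer treats every word as a noun unless a part-of-speech tag is passed.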

2. Bag-of-Words Model

The bag-of-words model is a simple text representation that converts each document into a vector of word counts.

from sklearn.feature_extraction.text import CountVectorizer

# Sample documents
documents = [
    "This is the first document.",
    "This document is the second document.",
    "And this is the third one.",
    "Is this the first document?"
]

# Build the bag-of-words model
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(documents)

# Get the feature names (vocabulary)
feature_names = vectorizer.get_feature_names_out()

# Print the term-frequency matrix
print("Feature names:", feature_names)
print("Term-frequency matrix:\n", X.toarray())

Output:

Feature names: ['and' 'document' 'first' 'is' 'one' 'second' 'the' 'third' 'this']
Term-frequency matrix:
 [[0 1 1 1 0 0 1 0 1]
 [0 2 0 1 0 1 1 0 1]
 [1 0 0 1 1 0 1 1 1]
 [0 1 1 1 0 0 1 0 1]]
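
Once fitted, the same vectorizer can also map unseen text into this vocabulary; words it never saw during fitting are simply ignored. A minimal sketch reusing the vectorizer object from above (the new sentence is just an illustrative example):

# Encode a new document with the already-fitted vocabulary
new_docs = ["This is another document."]
new_X = vectorizer.transform(new_docs)
print("New document vector:", new_X.toarray())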

3. TF-IDF Vectorization

TF-IDF (Term Frequency-Inverse Document Frequency) is a more refined text representation: it weights each word not only by how often it appears in a document, but also by how informative that word is across the whole corpus.

from sklearn.feature_extraction.text import TfidfVectorizer

# Sample documents
documents = [
    "This is the first document.",
    "This document is the second document.",
    "And this is the third one.",
    "Is this the first document?"
]

# Build the TF-IDF vectorizer
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)

# Get the feature names (vocabulary)
feature_names = vectorizer.get_feature_names_out()

# Print the TF-IDF matrix
print("Feature names:", feature_names)
print("TF-IDF matrix:\n", X.toarray())

输出结果:

特征名称: ['and' 'document' 'first' 'is' 'one' 'second' 'the' 'third' 'this']
TF-IDF矩阵:
 [[0.         0.47609426 0.55832438 0.55832438 0.         0.         0.47609426 0.         0.55832438]
 [0.         0.70710678 0.         0.35355339 0.         0.35355339 0.35355339 0.         0.35355339]
 [0.57735027 0.         0.         0.57735027 0.57735027 0.         0.57735027 0.57735027 0.57735027]
 [0.         0.47609426 0.55832438 0.55832438 0.         0.         0.47609426 0.         0.55832438]]
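
To see the "importance" part concretely, the fitted TfidfVectorizer exposes the learned inverse document frequency of each term via its idf_ attribute. A minimal sketch reusing the vectorizer from above:

# Inspect the learned IDF weights: terms that occur in fewer documents
# (e.g. 'and', 'one', 'second', 'third') get higher weights than terms
# that occur in every document (e.g. 'is', 'the', 'this')
for term, idf in zip(vectorizer.get_feature_names_out(), vectorizer.idf_):
    print(f"{term}: {idf:.3f}")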

4. K-Means Clustering

K-Means is a commonly used clustering algorithm that can partition text data into several clusters.

from sklearn.cluster import KMeans

# Cluster the documents using the TF-IDF matrix X from the previous section
kmeans = KMeans(n_clusters=2, random_state=42)
kmeans.fit(X)

# Print the cluster label assigned to each document
print("Cluster labels:", kmeans.labels_)
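
As a follow-up sketch (not part of the original snippet), the fitted model's labels can be matched back to the documents, and kmeans.predict combined with vectorizer.transform assigns a cluster to new text; the query sentence below is only an illustrative example:

# Show which cluster each original document was assigned to
for doc, label in zip(documents, kmeans.labels_):
    print(f"Cluster {label}: {doc}")

# Assign a cluster to an unseen document using the same TF-IDF vocabulary
new_vec = vectorizer.transform(["Is this the second document?"])
print("Predicted cluster:", kmeans.predict(new_vec)[0])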
