# An Introduction to Random Indexing

Sahlgren M. An Introduction to Random Indexing[C]// Methods & Applications of Semantic Indexing Workshop at International Conference on Terminology & Knowledge Engineering. 2005:194–201.

## Paper Overview

• The word space methodology
• Problems and solutions
• Random Indexing
• Results

## Random Indexing

• First, each context (e.g. each document or each word) in the data is assigned a unique and randomly generated representation called an index vector. These index vectors are sparse, high-dimensional, and ternary, which means that their dimensionality (d) is on the order of thousands, and that they consist of a small number of randomly distributed +1s and -1s, with the rest of the elements of the vectors set to 0.
• Then, context vectors are produced by scanning through the text, and each time a word occurs in a context (e.g. in a document, or within a sliding context window), that context’s d-dimensional index vector is added to the context vector for the word in question. Words are thus represented by d-dimensional context vectors that are effectively the sum of the words’ contexts.
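The two steps above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the dimensionality `d`, the number of non-zero entries, and the window size are arbitrary example values.

```python
import numpy as np

def make_index_vector(d=2000, nonzeros=10, rng=None):
    """Sparse ternary index vector: a few random +1/-1 entries, rest 0."""
    rng = rng if rng is not None else np.random.default_rng()
    v = np.zeros(d)
    positions = rng.choice(d, size=nonzeros, replace=False)
    v[positions] = rng.choice([1.0, -1.0], size=nonzeros)
    return v

def random_indexing(tokens, d=2000, window=2, seed=0):
    """Build context vectors: each occurrence of a word adds the index
    vectors of its neighbours inside the sliding context window."""
    rng = np.random.default_rng(seed)
    index = {w: make_index_vector(d, rng=rng) for w in set(tokens)}
    context = {w: np.zeros(d) for w in set(tokens)}
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                context[w] += index[tokens[j]]
    return index, context
```

Because the index vectors are nearly orthogonal in high dimensions, the accumulated context vectors approximately preserve co-occurrence information at a fraction of the cost of a full co-occurrence matrix.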

### Step 2: Generating Text Vectors

#### Generating context vectors for feature words

Let $wf(\omega_{j+k})$ be the weight, in document $d_i$, of the feature word $\omega_{j+k}$ that co-occurs with the feature word $\omega_j$ within the context window. Paper 2 computes these weights with a tf-idf weighting algorithm, citing:

Gorman J., Curran J. R. Random Indexing using Statistical Weight Functions[C]// Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006), 22-23 July 2006, Sydney, Australia. 2006: 457-464.
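The tf-idf weighting step can be sketched as follows. This uses one common formulation, tf × log(N / df); the exact variant used in the cited papers may differ.

```python
import math
from collections import Counter

def tfidf_weights(docs):
    """Per-document word weights: term frequency times log(N / document
    frequency). One common tf-idf formulation; papers often use variants."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each word at most once per document
    return [{w: tf * math.log(n / df[w]) for w, tf in Counter(doc).items()}
            for doc in docs]
```

With such weights, each neighbour's index vector would be scaled by its weight before being added to a word's context vector, instead of being added with weight 1.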

#### Generating the text vector

• Compute the average of the context vectors of all feature words appearing in the text

• This average is the text vector of document $d_i$
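The averaging step is a one-liner, assuming `context` maps each feature word to its accumulated context vector (as built in the first step):

```python
import numpy as np

def document_vector(doc_tokens, context):
    """Text vector of a document: mean of the context vectors of the
    feature words it contains (words without a vector are skipped)."""
    vecs = [context[w] for w in doc_tokens if w in context]
    return np.mean(vecs, axis=0)
```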

## Results

• Low computational cost
• Easy to implement
• High processing efficiency
• Good at capturing latent semantics: word vectors built from context information make it easier to handle synonyms and near-synonyms
• Good dimensionality-reduction performance