Define P(w∣a) to be the conditional probability that, given word a, the word w appears nearby (more precisely: within 2 hops). Then two words a and b are similar if P(w∣a)=P(w∣b) for every word w. That is, they have the same neighbourhood distribution.
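To make this concrete, here is a minimal sketch that estimates P(w∣a) from co-occurrence counts and checks the similarity criterion. It assumes a toy tokenised corpus and a symmetric window of 2; `neighbour_distributions` is a hypothetical helper written for illustration, not a library function.

```python
from collections import Counter, defaultdict

def neighbour_distributions(corpus, window=2):
    """Estimate P(w | a): for each word a, the distribution of words w
    appearing within `window` positions of a."""
    counts = defaultdict(Counter)
    for tokens in corpus:
        for i, a in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[a][tokens[j]] += 1
    # Normalise raw counts into conditional probabilities P(w | a).
    return {a: {w: c / sum(ctr.values()) for w, c in ctr.items()}
            for a, ctr in counts.items()}

corpus = [["the", "happy", "dog", "barked"],
          ["the", "delighted", "dog", "barked"]]
dist = neighbour_distributions(corpus)
# "happy" and "delighted" have identical neighbourhood distributions here,
# so under the definition above they count as similar.
print(dist["happy"] == dist["delighted"])  # True
```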
PMI(a,b) = log[P(a,b) / (P(a)P(b))] = log[P(a∣b) / P(a)].
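As a sketch, PMI can be estimated directly from counts. The corpus statistics below are made-up toy numbers, assumed purely for illustration:

```python
import math
from collections import Counter

# Toy counts (assumed, for illustration): how often each word occurs,
# and how often each unordered pair co-occurs within a window.
unigram = Counter({"dog": 50, "puppy": 20, "bark": 30})
pair = Counter({frozenset(("dog", "bark")): 15,
                frozenset(("puppy", "bark")): 6})
N = sum(unigram.values())  # total word occurrences
M = sum(pair.values())     # total co-occurrence events

def pmi(a, b):
    """PMI(a, b) = log[ P(a, b) / (P(a) P(b)) ], estimated from counts."""
    p_a, p_b = unigram[a] / N, unigram[b] / N
    p_ab = pair[frozenset((a, b))] / M
    return math.log(p_ab / (p_a * p_b))

print(pmi("dog", "bark"))  # positive: they co-occur more often than chance
```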
Words close in the embedding space are often synonyms (e.g. happy and delighted), antonyms (e.g. good and evil), or other easily interchangeable words (e.g. yellow and blue) (see Signed Word Embeddings). An important distinction: closeness here captures interchangeability, not closeness in meaning, though the two are often proxies for each other.
To get to analogies, let's define them through ratios, as below:
a is to b as A is to B gives P(w∣a)/P(w∣b) = P(w∣A)/P(w∣B) for every word w
dog is to puppy as cat is to kitten gives P(w∣dog)/P(w∣puppy) = f(w∣age=adult)/f(w∣age=cub) = P(w∣cat)/P(w∣kitten)
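A quick sanity check of the ratio definition, using hand-picked toy distributions (assumed, not estimated from any corpus) in which the analogy holds exactly:

```python
# Toy conditional distributions P(w | a), chosen so that the
# dog : puppy :: cat : kitten ratios match word-for-word.
cond = {
    "dog":    {"walk": 0.6, "small": 0.1, "cute": 0.3},
    "puppy":  {"walk": 0.3, "small": 0.3, "cute": 0.4},
    "cat":    {"walk": 0.6, "small": 0.1, "cute": 0.3},
    "kitten": {"walk": 0.3, "small": 0.3, "cute": 0.4},
}

def analogy_ratios(a, b, A, B, words):
    """Compare P(w|a)/P(w|b) against P(w|A)/P(w|B) for each word w."""
    for w in words:
        lhs = cond[a][w] / cond[b][w]
        rhs = cond[A][w] / cond[B][w]
        print(f"{w}: {lhs:.2f} vs {rhs:.2f}")

# 'dog is to puppy as cat is to kitten' holds exactly in this toy data:
analogy_ratios("dog", "puppy", "cat", "kitten", ["walk", "small", "cute"])
```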