{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%load_ext autoreload\n", "%autoreload 2\n", "\n", "import os\n", "import wget\n", "import zipfile\n", "import numpy as np\n", "import pandas as pd\n", "import networkx as nx\n", "import plotly.graph_objects as go\n", "from utils import *\n", "from collections import Counter\n", "from tqdm import tqdm\n", "import time\n", "\n", "# ignore warnings\n", "import warnings\n", "warnings.filterwarnings(\"ignore\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# import the graphs from the saved files. NOT TO BE INCLUDED IN THE FINAL NOTEBOOK\n", "G_brighkite_checkins = nx.read_gpickle(os.path.join('data', 'brightkite', 'brightkite_checkins_graph.gpickle'))\n", "G_gowalla_checkins = nx.read_gpickle(os.path.join('data', 'gowalla', 'gowalla_checkins_graph.gpickle'))\n", "G_foursquareEU_checkins = nx.read_gpickle(os.path.join('data', 'foursquare', 'foursquareEU_checkins_graph.gpickle'))\n", "G_foursquareIT_checkins = nx.read_gpickle(os.path.join('data', 'foursquare', 'foursquareIT_checkins_graph.gpickle'))\n", "\n", "G_brighkite_friends = nx.read_gpickle(os.path.join('data', 'brightkite', 'brightkite_friendships_graph.gpickle'))\n", "G_gowalla_friends = nx.read_gpickle(os.path.join('data', 'gowalla', 'gowalla_friendships_graph.gpickle'))\n", "G_foursquareEU_friends = nx.read_gpickle(os.path.join('data', 'foursquare', 'foursquareEU_friendships_graph.gpickle'))\n", "G_foursquareIT_friends = nx.read_gpickle(os.path.join('data', 'foursquare', 'foursquareIT_friendships_graph.gpickle'))\n", "\n", "checkins_graphs = [G_brighkite_checkins, G_gowalla_checkins, G_foursquareEU_checkins, G_foursquareIT_checkins]\n", "friendships_graph = [G_brighkite_friends, G_gowalla_friends, G_foursquareIT_friends, G_foursquareEU_friends]\n", "\n", "graphs_all = checkins_graphs + friendships_graph\n", "\n", "analysis_results = pd.read_pickle('analysis_results.pkl')\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction\n", "\n", "## Graph Theory\n", "\n", "---\n", "\n", "## Aim of the project\n", "\n", "Given a social network, which of its nodes are more central? This question has been asked many times in sociology, psychology and computer science, and a whole plethora of centrality measures (a.k.a. centrality indices, or rankings) were proposed to account for the importance of the nodes of a network. \n", "\n", "These networks, typically generated directly or indirectly by human activity and interaction (and therefore hereafter dubbed social”), appear in a large variety of contexts and often exhibit a surprisingly similar structure. One of the most important notions that researchers have been trying to capture in such networks is “node centrality”: ideally, every node (often representing an individual) has some degree of influence or importance within the social domain under consideration, and one expects such importance to surface in the structure of the social network; centrality is a quantitative measure that aims at revealing the importance of a node.\n", "\n", "Among the types of centrality that have been considered in the literature, many have to do with distances between nodes. 
"\n", "The role played by shortest paths is justified by one of the most well-known features of complex networks, the so-called small-world phenomenon. A small-world network is a graph where the average distance between nodes is logarithmic in the size of the network, whereas the clustering coefficient is larger (that is, neighborhoods tend to be denser) than in a random Erdős-Rényi graph with the same size and average distance. The fact that social networks (whether electronically mediated or not) exhibit the small-world property has been known at least since Milgram's famous experiment and is arguably the most popular of all features of complex networks. For instance, the average distance of the Facebook graph was recently established to be just $4.74$.\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# The Erdős-Rényi model\n", "\n", "Before 1960, graph theory mainly dealt with the properties of specific individual graphs. In the 1960s, Paul Erdős and Alfréd Rényi initiated a systematic study of random graphs. Random graph theory is, in fact, not the study of individual graphs, but the study of a statistical ensemble of graphs (or, as mathematicians prefer to call it, a \\emph{probability space} of graphs). The ensemble is a class consisting of many different graphs, where each graph has a probability attached to it. A property studied is said to exist with probability $P$ if the total probability of a graph in the ensemble possessing that property is $P$ (or the total fraction of graphs in the ensemble that have this property is $P$). This approach allows the use of probability theory in conjunction with discrete mathematics for studying graph ensembles. A property is said to exist for a class of graphs if the fraction of graphs in the ensemble which do not have this property is of zero measure. This is usually termed a property of \\emph{almost every (a.e.)} graph. Sometimes the terms “almost surely” or “with high probability” are also used (with the former usually taken to mean that the residual probability vanishes exponentially with the system size). \n", "\n", "\n", "## Erdős-Rényi graphs\n", "\n", "Two well-studied graph ensembles are $G_{N,M}$, the ensemble of all graphs with $N$ nodes and $M$ edges, and $G_{N,p}$, the ensemble of all graphs with $N$ nodes and probability $p$ of any two nodes being connected. These two families, initially studied by Erdős and Rényi, are known to be similar if $M = \\binom{N}{2} p$, so as long as $p$ is not too close to $0$ or $1$ they are referred to as ER graphs. \n", "\n", "An important attribute of a graph is the average degree, i.e., the average number of edges connected to each node. We will denote the degree of the $i$th node by $k_i$ and the average degree by $\\langle k \\rangle$. $N$-vertex graphs with $\\langle k \\rangle = O(N^0)$ are called sparse graphs. \n", "\n", "An interesting characteristic of the ensemble $G_{N,p}$ is that many of its properties have a related threshold function, $p_t(N)$, such that the property exists, in the “thermodynamic limit” of $N \\to \\infty$, with probability $0$ if $p < p_t$, and with probability $1$ if $p > p_t$. This phenomenon is analogous to the physical concept of a percolation phase transition.\n",
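 "\n", "As a quick illustration (a minimal sketch, not part of the analysis below), we can sample a graph from $G_{N,p}$ with networkx and check that the empirical average degree concentrates around its expected value $p(N-1)$:\n", "\n", "```python\n", "N, p = 5000, 0.002\n", "G_er = nx.erdos_renyi_graph(N, p, seed=42)\n", "avg_k = np.mean([d for _, d in G_er.degree()])\n", "print(avg_k, p * (N - 1))  # the two values should be close\n", "```\n",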
"\n", "Another property is the average path length between any two nodes, which in almost every graph of the ensemble (with $\\langle k \\rangle > 1$ and finite) is of order $\\ln N$. This small, logarithmic distance is the origin of the “small-world” phenomenon that characterizes networks.\n", "\n", "\n", "## Scale-free networks\n", "\n", "The Erdős-Rényi model has traditionally been the dominant subject of study in the field of random graphs. Recently, however, several studies of real-world networks have found that the ER model fails to reproduce many of their observed properties. One of the simplest properties of a network that can be measured directly is the degree distribution, i.e., the fraction $P(k)$ of nodes having $k$ connections (degree $k$). A well-known result for ER networks is that the degree distribution is Poissonian,\n", "\n", "\\begin{equation}\n", "    P(k) = \\frac{e^{-z} z^k}{k!}\n", "\\end{equation}\n", "\n", "where $z = \\langle k \\rangle$ is the average degree. Direct measurements of the degree distribution for real networks show that the Poisson law does not apply. Rather, these nets often exhibit a scale-free degree distribution:\n", "\n", "\\begin{equation}\n", "    P(k) = ck^{-\\gamma} \\quad \\text{for} \\quad k = m, \\ldots, K\n", "\\end{equation}\n", "\n", "where $c \\sim (\\gamma -1)m^{\\gamma - 1}$ is a normalization factor, and $m$ and $K$ are the lower and upper cutoffs for the degree of a node, respectively. The divergence of moments higher than $\\lceil \\gamma -1 \\rceil$ (as $K \\to \\infty$ when $N \\to \\infty$) is responsible for many of the anomalous properties attributed to scale-free networks. \n", "\n", "All real-world networks are finite and therefore all their moments are finite. The actual value of the cutoff $K$ plays an important role. It may be approximated by noting that the total probability of nodes with $k > K$ is of order $1/N$\n", "\n", "\\begin{equation}\n", "    \\int_K^\\infty P(k) dk \\sim \\frac{1}{N}\n", "\\end{equation}\n", "\n", "This yields the result\n", "\n", "\\begin{equation}\n", "    K \\sim m N^{1/(\\gamma -1)}\n", "\\end{equation}\n", "\n", "The degree distribution alone is not enough to characterize the network. There are many other quantities, such as the degree-degree correlation (between connected nodes), the spatial correlations, the clustering coefficient, the betweenness or centrality distribution, and the self-similarity exponents.\n", "\n", "# Diameter and fractal dimension\n", "\n", "Regular lattices can be viewed as networks embedded in Euclidean space, of a well-defined dimension, $d$. This means that $n(r)$, the number of nodes within a distance $r$ from an origin, grows as $n(r) \\sim r^d$ (for large $r$). For fractal objects, $d$ in the last relation may be a non-integer and is replaced by the fractal dimension $d_f$.\n", "\n", "An example of a network where the above power laws are not valid is the Cayley tree (also known as the Bethe lattice). The Cayley tree is a regular graph, of fixed degree $z$ and with no loops. An infinite Cayley tree cannot be embedded in a Euclidean space of finite dimensionality. The number of nodes at distance $l$ from a given node grows as $n(l) \\sim (z - 1)^l$. Since this exponential growth is faster than any power law, Cayley trees are referred to as infinite-dimensional systems.\n",
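 "\n", "A quick numerical check of this exponential growth (a hedged sketch: the random regular graph below stands in for a Cayley tree, since it is only locally tree-like):\n", "\n", "```python\n", "z, N = 3, 10000\n", "G_rr = nx.random_regular_graph(z, N, seed=42)\n", "dist = nx.single_source_shortest_path_length(G_rr, 0)\n", "for l in range(1, 6):\n", "    shell = sum(1 for d in dist.values() if d == l)\n", "    print(l, shell, z * (z - 1) ** (l - 1))  # observed vs. ideal tree count\n", "```\n",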
\n", "\n", "In most random network models, the structure is locally tree-like (since most loops occur only for $n(l) \\sim N$), and since the number of nodes grows as $n(l) \\sim \\langle k - 1 \\rangle^l$, they are also infinite dimensional. As a consequence, the diameter of such graphs (i.e., the minimal path between the most distant nodes) scales as $D \\sim \\ln N$. Many properties of ER networks, including the logarithmic diameter, are also present in Cayley trees. This small diameter in ER graphs and Cayley trees is in contrast to that of finite-dimensional lattices, where $D \\sim N^{1/d_l}$. \n", "\n", "Similar to ER, percolation on infinite-dimensional lattices and the Cayley tree yields a critical threshold $p_c = 1/(z - 1)$. For $p > p_c$, a “giant cluster” of order $N$ exists, whereas for $p < pc$,only small clusters appear. For infinite-dimensional lattices (similar to ER networks) at criticality, $p =\n", "p_c$ , the giant component is of size $N^{2/3}$. This last result follows from the fact that percolation on lattices in dimension $d \\geq d_c = 6$ is in the same universality class as infinite-dimensional percolation, where the fractal dimension of the giant cluster is $d_f = 4$, and therefore the size of the giant cluster scales as $N^{d_f/d_c} = N^{2/3}$. The dimension $d_c$ is called the “upper critical dimension.” Such an upper critical dimension exists not only in percolation phenomena, but also in other physical models, such as in the self-avoiding walk model for polymers and in the Ising model for magnetism; in both these cases $d_c = 4$.\n", "\n", "Watts and Strogatz suggested a model that retains the local high clustering of lattices (i.e., the neighbors of a node have a much higher probability of being neighbors than in random graphs) while reducing the diameter to $D \\sim \\ln N$ . This so-called, “small-world network” is achieved by replacing a fraction $\\varphi$ of the links in a regular lattice with random links, to random distant neighbors. \n", "\n", "## Random graphs as a model of real networks\n", "\n", "Many natural and man-made systems are networks, i.e., they consist of objects and interactions between them. These include computer networks, in particular the Internet, logical networks, such as links between WWW pages, and email networks, where a link represents the presence of a person's address in another person's address book. Social interactions in populations, work relations, etc. can also be modeled by a network structure. Networks can also describe possible actions or movements of a system in a configuration space (a phase space), and the nearest configurations are connected by a link. All the above examples and many others have a graph structure that can be studied. Many of them have some ordered structure, derived from geographical or geometrical considerations, cluster and group formation, or other specific properties. However, most of the above networks are far from regular lattices and are much more complex and random in structure. Therefore, it can be assumed (with a lot of precaution) that they maintain many properties of the appropriate random graph model. \n", "\n", "In many aspects scale-free networks can be regarded as a generalization of ER networks. For large $\\gamma$ (usually, for $\\gamma > 4$) the properties of scale-free networks, such as distances, optimal paths, and percolation, are the same as in ER networks. In contrast, for $\\gamma < 4$, these properties are very different and can be regarded as anomalous. 
, { "cell_type": "markdown", "metadata": {}, "source": [ "# Discovering the datasets" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "To perform our analysis, we will use the following datasets:\n", "\n", "- **Brightkite**\n", "- **Gowalla**\n", "- **Foursquare**\n", "\n", "We can download the datasets using the function `download_dataset` from the `utils` module. It will download the datasets in the `data` folder, organized in sub-folders in the following way:\n", "\n", "```\n", "data\n", "├── brightkite\n", "│   ├── brightkite_checkins.txt\n", "│   └── brightkite_friends_edges.txt\n", "├── foursquare\n", "│   ├── foursquare_checkins.txt\n", "│   ├── foursquare_friends_edges.txt\n", "│   └── raw_POIs.txt\n", "└── gowalla\n", "    ├── gowalla_checkins.txt\n", "    └── gowalla_friends_edges.txt\n", "```\n", "\n", "If any of the datasets is already downloaded, it will not be downloaded again. For further details about the function below, please refer to the `utils` module.\n", "\n", "> NOTE: the Stanford servers tend to be slow, so it may take a while (about 5 minutes) to download all the datasets.\n", "\n", "---\n", "\n", "### A deeper look at the datasets\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "download_datasets()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Let's have a deeper look at them.\n", "\n", "## Brightkite\n", "\n", "[Brightkite](http://www.brightkite.com/) was once a location-based social networking service provider where users shared their locations by checking-in. The friendship network was collected using their public API. We will work with two different datasets. This is how they look after being filtered by the `download_dataset` function:\n", "\n", "- `data/brightkite/brightkite_checkins.txt`: the checkins, a tsv file with 2 columns of user id and location. This is not in the form of a graph edge list; in the next section we will see how to convert it into a graph. Originally there were other columns, such as the time of the checkins. During the filtering, we used this information to extract only the checkins from 2009 and then deleted it. This is why the number of checkins is smaller than in the original dataset. \n", "    \n", "- `data/brightkite/brightkite_friends_edges.txt`: the friendship network, a tsv file with 2 columns of users ids. This file is untouched by the function; it is already in the form of a graph edge list." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Gowalla\n", "\n", "Gowalla is a location-based social networking website where users share their locations by checking-in. The friendship network is undirected and was collected using their public API. As for Brightkite, we will work with two different datasets. This is how they look after being filtered by the `download_dataset` function:\n",
"\n", "- `data/gowalla/gowalla_checkins.txt`: the checkins, a tsv file with 2 columns of user id and location. This is not in the form of a graph edge list. Originally there were other columns, such as the time of the checkins. During the filtering, we used this information to extract only the checkins from 2009 and then deleted it. This is why the number of checkins is smaller than in the original dataset. \n", "\n", "- `data/gowalla/gowalla_friends_edges.txt`: the friendship network, a tsv file with 2 columns of users ids. This file is untouched by the function; it is already in the form of a graph edge list. In the next section, when we build the friendship network, we will only consider the users that have at least one check-in in 2009, to avoid biases in the analysis." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Foursquare\n", "\n", "[Foursquare](https://foursquare.com/) is a location-based social networking website where users share their locations by checking-in. This dataset includes long-term (about 22 months, from Apr. 2012 to Jan. 2014) global-scale check-in data collected from Foursquare, and also two snapshots of user social networks before and after the check-in data collection period (see the original paper for more details). We will work with three different datasets:\n", "\n", "- `data/foursquare/foursquare_checkins.txt`: a tsv file with 2 columns of user id and location. This is not in the form of a graph edge list. This file will remain untouched by the function, but due to its size, in the next sections we will focus on the EU sub-sample and the IT sub-sample. The friendship edge list will be modified accordingly.\n", "\n", "- `data/foursquare/foursquare_friends_edges.txt`: the friendship network, a tsv file with 2 columns of users ids. This is in the form of a graph edge list. \n", "\n", "- `data/foursquare/raw_POIs.txt`: the POIs, a tsv file with 2 columns of location and country ISO code. We are going to use this file to create the sub-samples of the dataset.\n", "\n", "> **NOTE:** In this case I preferred not to take sub-samples based on time. The reason is that there may be a period of time in which the social network was not very popular in some countries, so the analysis could be biased. Instead, I decided to take sub-samples based on the country. In this way I have a more homogeneous dataset." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Building the networks" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We are going to build the networks for the three datasets as undirected graphs $G = (V, E)$, where $V$ is the set of nodes and $E$ is the set of edges. The nodes represent the users and an edge indicates that the two individuals visited the same location at least once.\n", "\n", "The check-ins files of the three datasets are not in the form of a graph edge list, so we need to manipulate them. Let's have a look at the number of lines of each file (note that gowalla and brightkite are already filtered)." ] }
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def count_lines_and_unique_elements(file):\n", " df = pd.read_csv(file, sep='\\t', header=None)\n", " print('Number of lines: ', len(df))\n", " print('Number of unique elements: ', len(df[0].unique()))\n", "\n", "gowalla_path = os.path.join('data', 'gowalla', 'gowalla_checkins.txt')\n", "brightkite_path = os.path.join('data', 'brightkite', 'brightkite_checkins.txt')\n", "foursquare_path = os.path.join('data', 'foursquare', 'foursquare_checkins.txt')\n", "\n", "_ = [gowalla_path, brightkite_path, foursquare_path]\n", "\n", "for path in _:\n", " print(path.split(os.sep)[-2])\n", " count_lines_and_unique_elements(path)\n", " print()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We would like to build a graph starting from an edge list. To do that, we are going to check, for each venue, all the users that visited it. Then, we will create an edge between each pair of users that visited the same venue (avoiding repetitions). This can be easily done in python, but it's going to be a bit slow (this is why we are considering sub-samples of the datasets). Let's see how to do it.\n", "\n", "```python\n", "# let df be the dataframe [\"user_id\", \"venue_id\"] of the checkins\n", "\n", "venues_users = df.groupby(\"venue_id\")[\"user_id\"].apply(set)\n", "\n", " for users in venues_users:\n", " for user1, user2 in combinations(users, 2):\n", " G.add_edge(user1, user2)\n", "```\n", "\n", "It the `utilis.py` module, we have a function that does exactly this called `create_graph_from_checkins`. It takes as input the name of the dataset and returns a networkx graph object. By default it will also write the edge list to a file in the respective dataset folder. The options are\n", "\n", "- `brightkite`\n", "- `gowalla`\n", "- `foursquareEU`\n", "- `foursquareIT`\n", "\n", "Let's see how it works:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# It takes about 3 minutes to create the all the 4 graphs on a i7-8750H CPU\n", "\n", "G_brighkite_checkins = create_graph_from_checkins('brightkite')\n", "G_brighkite_checkins.name = 'Brightkite Checkins Graph'\n", "\n", "G_gowalla_checkins = create_graph_from_checkins('gowalla')\n", "G_gowalla_checkins.name = 'Gowalla Checkins Graph'\n", "\n", "G_foursquareEU_checkins = create_graph_from_checkins('foursquareEU')\n", "G_foursquareEU_checkins.name = 'Foursquare EU Checkins Graph'\n", "\n", "G_foursquareIT_checkins = create_graph_from_checkins('foursquareIT')\n", "G_foursquareIT_checkins.name = 'Foursquare IT Checkins Graph'" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Friendship network\n", "\n", "Now we want to create a graph where two users are connected if they are friends in the social network. We are intending the concept of friendship in a \"facebook way\", not a \"twitter way\". Less empirically, the graphs is not going to be directed and the edges are not going to be weighted. A user can't be friend with himself, and can't be friend with a user without the user being friend with him.\n", "\n", "Since we filtered the checkins for foursquare and gowalla, we are considering only the users that are also present in the check-ins graph. We can build this graph with the function `create_friendships_graph` in the `utils.py` module. It takes as input the name of the dataset and returns a networkx graph object. 
"\n", "- `brightkite`\n", "- `gowalla`\n", "- `foursquareEU`\n", "- `foursquareIT`\n", "\n", "> **NOTE:** This function is implemented without requiring the check-ins graph to be loaded in memory: it works directly on the edge list file. This choice was made because someone may want to analyze only the friendship network, in which case there is no need to load the check-ins graph and waste memory. Furthermore, networkx is tremendously slow when loading a graph from an edge list file (since it's written in pure python), so this choice is also motivated by the speed of the function.\n", "\n", "Let's see how it works:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "G_brighkite_friends = create_friendships_graph('brightkite')\n", "print(\"Computation done for Brightkite friendship graph\")\n", "G_brighkite_friends.name = 'Brightkite Friendship Graph'\n", "\n", "\n", "G_gowalla_friends = create_friendships_graph('gowalla')\n", "print(\"Computation done for (filtered) Gowalla friendship graph\")\n", "G_gowalla_friends.name = '(Filtered) Gowalla Friendship Graph'\n", "\n", "\n", "G_foursquareIT_friends = create_friendships_graph('foursquareIT')\n", "print(\"Computation done for Foursquare IT friendship graph\")\n", "G_foursquareIT_friends.name = 'Foursquare IT Friendship Graph'\n", "\n", "\n", "G_foursquareEU_friends = create_friendships_graph('foursquareEU')\n", "print(\"Computation done for Foursquare EU friendship graph\")\n", "G_foursquareEU_friends.name = 'Foursquare EU Friendship Graph'" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have our graphs, let's have a look at some basic information about them" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for G in [G_brighkite_friends, G_gowalla_friends, G_foursquareIT_friends, G_foursquareEU_friends]:\n", "    print(G.name)\n", "    print('Number of nodes: ', G.number_of_nodes())\n", "    print('Number of edges: ', G.number_of_edges())\n", "    print()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# Properties of the structure of the networks\n", "" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Introduction\n", "\n", "To help us visualize the results of our analysis, we create a dataframe and fill it with all the information that we will retrieve from our networks in this section.\n", "\n", "As we'll see in the cells below, the full networks are very big, even after the filtering that we did. This leads to long run times for the functions that we are going to use. To avoid this, we are going to use a sub-sample of the networks: depending on how much we sample, our results will be more or less accurate. \n", "\n", "While reviewing this notebook, I suggest using higher values for the sampling rate, so that you can see the results faster; this will give you a general idea of how the implemented functions work. At the end of this section I have provided a link to my GitHub repository where you can download the results obtained with very low sampling rates. In this way you can test the functions on mock networks, check that they work as expected, and then proceed with the analysis using the more accurate results.",
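 "\n", "\n", "The node-sampling idea used throughout this section is, in essence, the following (a minimal sketch; the actual `utils` functions also deal with connected components and other edge cases):\n", "\n", "```python\n", "import random\n", "\n", "def subsample(G, k):\n", "    # drop a random fraction k of the nodes and keep the induced subgraph\n", "    H = G.copy()\n", "    drop = random.sample(list(H.nodes()), int(k * H.number_of_nodes()))\n", "    H.remove_nodes_from(drop)\n", "    return H\n", "```"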
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "analysis_results = pd.DataFrame(columns=['Graph', 'Number of Nodes', 'Number of Edges', 'Average Degree', 'Average Clustering Coefficient', 'log N', 'Average Shortest Path Length', 'betweenness centrality'], index=None)\n", "\n", "checkins_graphs = [G_brighkite_checkins, G_gowalla_checkins, G_foursquareEU_checkins, G_foursquareIT_checkins]\n", "friendships_graph = [G_brighkite_friends, G_gowalla_friends, G_foursquareIT_friends, G_foursquareEU_friends]\n", "\n", "graphs_all = checkins_graphs + friendships_graph\n", "\n", "for graph in graphs_all:\n", " analysis_results = analysis_results.append(\n", " {'Graph': graph.name, \n", " 'Number of Nodes': graph.number_of_nodes(), \n", " 'log N': np.log(graph.number_of_nodes()),\n", " 'Number of Edges': graph.number_of_edges()}, \n", " ignore_index=True)\n", "\n", "analysis_results" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Average Degree\n", "\n", "The degree of a node is the number of links connected to it. The average degree alone, is not very useful for our future analysis, so we won't spend much time about it. In the next section we will see that the degree distribution is a much more useful measure.\n", "\n", "The degree distribution, $P(k)$, is the fraction of sites having degree $k$. We know from the literature that many real networks do not exhibit a Poisson degree distribution, as predicted in the ER model. In fact, many of them exhibit a distribution with a long, power-law, tail, $P(k) \\sim k^{-\\gamma}$ with some $γ$, usually between $2$ and 3$.\n", "\n", "For know, we will just compute the average degree of our networks and add it to the dataframe." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for G in graphs_all:\n", " avg_deg = np.mean([d for n, d in G.degree()])\n", " analysis_results.loc[analysis_results['Graph'] == G.name, 'Average Degree'] = avg_deg\n", "\n", "analysis_results" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Clustering coefficient\n", "\n", "The clustering coefficient is usually related to a community represented by local structures. The usual definition of clustering is related to the number of triangles in the network. The clustering is high if two nodes sharing a neighbor have a high probability of being connected to each other. There are two common definitions of clustering. The first is global,\n", "\n", "\\begin{equation}\n", " C = \\frac{3 \\times \\text{the number of triangles in the network}}{\\text{the number of connected triples of vertices}}\n", "\\end{equation}\n", "\n", "where a “connected triple” means a single vertex with edges running to an unordered\n", "pair of other vertices. \n", "\n", "A second definition of clustering is based on the average of the clustering for single nodes. The clustering for a single node is the fraction of pairs of its linked neighbors out of the total number of pairs of its neighbors:\n", "\n", "\\begin{equation}\n", " C_i = \\frac{\\text{the number of triangles connected to vertex }i}{\\text{the number of triples centered on vertex } i}\n", "\\end{equation}\n", "\n", "For vertices with degree $0$ or $1$, for which both numerator and denominator are zero, we use $C_i = 0$. 
, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Clustering coefficient\n", "\n", "The clustering coefficient is usually related to a community represented by local structures. The usual definition of clustering is related to the number of triangles in the network. The clustering is high if two nodes sharing a neighbor have a high probability of being connected to each other. There are two common definitions of clustering. The first is global,\n", "\n", "\\begin{equation}\n", "    C = \\frac{3 \\times \\text{the number of triangles in the network}}{\\text{the number of connected triples of vertices}}\n", "\\end{equation}\n", "\n", "where a “connected triple” means a single vertex with edges running to an unordered pair of other vertices. \n", "\n", "A second definition of clustering is based on the average of the clustering for single nodes. The clustering for a single node is the fraction of pairs of its linked neighbors out of the total number of pairs of its neighbors:\n", "\n", "\\begin{equation}\n", "    C_i = \\frac{\\text{the number of triangles connected to vertex }i}{\\text{the number of triples centered on vertex } i}\n", "\\end{equation}\n", "\n", "For vertices with degree $0$ or $1$, for which both numerator and denominator are zero, we use $C_i = 0$. Then the clustering coefficient for the whole network is the average\n", "\n", "\\begin{equation}\n", "    C = \\frac{1}{n} \\sum_{i} C_i\n", "\\end{equation}\n", "\n", "In both cases the clustering is in the range $0 \\leq C \\leq 1$. \n", "\n", "In random graph models such as the ER model and the configuration model, the clustering coefficient is low and decreases to $0$ as the system size increases. This is also the situation in many growing network models. However, in many real-world networks the clustering coefficient is rather high and remains constant for large network sizes. This observation led to the introduction of the small-world model, which offers a combination of a regular lattice with high clustering and a random graph. \n", "\n", "---\n", "\n", "As one can imagine from the definition given above, this operation is very expensive. The library `networkx` provides a function to compute the clustering coefficient of a graph: in particular, the function `average_clustering` computes the average clustering coefficient of a graph. Unfortunately, our datasets (even after sub-sampling) are too big to be processed exactly in a reasonable time.\n", "\n", "We can instead use the `average_clustering` function from the `utils` module to compute the average clustering coefficient on a random sub-sample of the graph. The function takes as input:\n", "\n", "- `G: networkx graph object`: the graph on which we want to compute the average clustering coefficient\n", "- `k: float (default=None)`: fraction of nodes to remove from the graph. If k is None, the average clustering coefficient of each connected component is computed using all the nodes of the connected component.\n", "\n", "And returns:\n", "\n", "- `float`: the average clustering coefficient of the graph\n", "\n", "Depending on the machine and the time available, we can choose different values for `k`. Lower values will give us a more precise result, but will take longer to compute. On the other hand, higher values will give us a less precise result, but will be faster to compute. I suggest using `k=0.9` to test the function very quickly, and at most `k=0.6` to get a more precise result.\n", "\n", "> Since the checkins graphs are way bigger than the friendship graphs, I created two for loops to compute the average clustering coefficient with different values of `k`. A sketch of the approach is shown below.",
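 "\n", "\n", "For reference, the core of such a function might look like this (a hedged sketch of the approach, not the exact `utils` implementation):\n", "\n", "```python\n", "def average_clustering_sketch(G, k=None):\n", "    H = subsample(G, k) if k else G  # subsample() as sketched earlier\n", "    # average the nodewise clustering coefficients over the remaining graph\n", "    return nx.average_clustering(H)\n", "```"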
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Timing reference: with k = 0.6 for checkins graphs and k = 0.2 for friendship graphs it took about 8 minutes to compute the average clustering coefficient of all the graphs on an i7-8750H CPU. Since we are taking random samplings, this of course depends on the random seed.\n", "\n", "for graph in checkins_graphs:\n", "    print(\"\\nComputing average clustering coefficient for the {}...\".format(graph.name))\n", "    start = time.time()\n", "    avg_clustering = average_clustering_coefficient(graph, 0.3)\n", "    end = time.time()\n", "\n", "    print(\"\\tAverage clustering coefficient: {}\".format(avg_clustering))\n", "    print(\"\\tCPU time: \" + str(round(end-start,1)) + \" seconds\")\n", "    analysis_results.loc[analysis_results['Graph'] == graph.name, 'Average Clustering Coefficient'] = avg_clustering\n", "\n", "for graph in friendships_graph:\n", "    print(\"\\nComputing average clustering coefficient for the {}...\".format(graph.name))\n", "    start = time.time()\n", "    avg_clustering = average_clustering_coefficient(graph, 0.1)\n", "    end = time.time()\n", "\n", "    print(\"\\tAverage clustering coefficient: {}\".format(avg_clustering))\n", "    print(\"\\tCPU time: \" + str(round(end-start,1)) + \" seconds\")\n", "    analysis_results.loc[analysis_results['Graph'] == graph.name, 'Average Clustering Coefficient'] = avg_clustering\n", "\n", "analysis_results\n", "# save the results as pandas dataframe object\n", "analysis_results.to_pickle('analysis_results.pkl')" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Average Path Length\n", "\n", "Since we are considering our networks as _not_ embedded in real space (even if we could theoretically), the geometrical distance between nodes is meaningless. The most important distance measure in such networks is the minimal number of hops (or chemical distance): the distance between two nodes in the network is defined as the number of edges in the shortest path between them. If the edges are assumed to be weighted, the lowest total weight path, called the _optimal path_, may also be used. The usual mathematical definition of the diameter of the network is the length of the path between the farthest nodes in the network.\n", "\n", "In the next section, we'll see how to characterize this distance in a small world network. \n", "\n", "--- \n", "\n", "The `networkx` library provides a function to compute the average shortest path length of a graph: the function `average_shortest_path_length`. Unfortunately, as always, there are some limitations. The function can only be applied to connected graphs, and since we are taking sub-samples of our datasets, there is a probability that a sub-sample is not connected. Another problem is that this operation is very expensive! The average shortest path length is defined as\n", "\n", "$$ a = \\sum_{s \\in V} \\sum_{t \\in V} \\frac{d(s,t)}{n(n-1)} $$\n", "\n", "where $V$ is the set of nodes in the graph, $n$ is the number of nodes in the graph, and $d(s,t)$ is the shortest path length between nodes $s$ and $t$. Since our graphs are unweighted, each shortest path is computed with a breadth-first search (for weighted graphs, networkx would use Dijkstra's algorithm instead). \n", "\n", "Since we are interested in the average shortest path length of all our connected components, for each node we need to compute the shortest paths to all the other nodes. Given the dimensions of our datasets and the slowness of networkx, computing the average shortest path length of the whole graph is not feasible.\n", "\n", "To overcome this problem, we can use the `average_shortest_path` function from the `utils` module to compute the average shortest path length on a random sub-sample of the graph. The function takes as input:\n",
"\n", "- `G: networkx graph object`: the graph on which we want to compute the average shortest path length\n", "- `k: float (default=None)`: fraction of nodes to remove from the graph. If k is None, the average shortest path length of each connected component is computed using all the nodes of the connected component.\n", "\n", "And returns:\n", "\n", "- `float`: the average shortest path length of the graph\n", "\n", "The implementation is very straightforward. First we remove a random sub-sample of the nodes from the graph. Then we create a list with all the connected components of the sub-sampled graph with at least 10 nodes, and finally we compute the average shortest path length using the networkx function `average_shortest_path_length`. The choice of 10 nodes is arbitrary and based on empirical observations: we do that to avoid creating small communities with a very low average shortest path length that could bias our results. A sketch of this approach is shown after the next cell.\n", "\n", "Depending on the machine and the time available, we can choose different values for `k`. Lower values will give us a more precise result, but will take longer to compute. On the other hand, higher values will give us a less precise result, but will be faster to compute. I suggest using `k=0.9` to test the function very quickly, and at most `k=0.6` to get a more precise result.\n", "\n", "> Since the checkins graphs are way bigger than the friendship graphs, I created two for loops to compute the average shortest path length with different values of `k`.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Timing reference: with k = 0.6 for checkins graphs and k = 0.2 for friendship graphs it took about 18 minutes in total for all the graphs on an i7-8750H CPU. Since we are taking random samplings, this of course depends on the random seed\n", "\n", "for graph in checkins_graphs:\n", "    print(\"\\nComputing average shortest path length for graph: \", graph.name)\n", "\n", "    start = time.time()\n", "    average_shortest_path_length = average_shortest_path(graph, 0.3)\n", "    end = time.time()\n", "\n", "    print(\"\\tAverage shortest path length: {}\".format(round(average_shortest_path_length,2)))\n", "    print(\"\\tCPU time: \" + str(round(end-start,1)) + \" seconds\")\n", "\n", "    analysis_results.loc[analysis_results['Graph'] == graph.name, 'Average Shortest Path Length'] = average_shortest_path_length\n", "\n", "for graph in friendships_graph:\n", "    print(\"\\nComputing average shortest path length for graph: \", graph.name)\n", "\n", "    start = time.time()\n", "    average_shortest_path_length = average_shortest_path(graph, 0.1)\n", "    end = time.time()\n", "\n", "    print(\"\\tAverage shortest path length: {}\".format(round(average_shortest_path_length,2)))\n", "    print(\"\\tCPU time: \" + str(round(end-start,1)) + \" seconds\")\n", "\n", "    analysis_results.loc[analysis_results['Graph'] == graph.name, 'Average Shortest Path Length'] = average_shortest_path_length\n", "\n", "analysis_results\n", "# save the results as pandas dataframe object\n", "analysis_results.to_pickle('analysis_results.pkl')" ] }
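, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "For reference, the core of `average_shortest_path` might look like this (a hedged sketch of the approach described above, not the exact `utils` implementation):\n", "\n", "```python\n", "def average_shortest_path_sketch(G, k=None):\n", "    H = subsample(G, k) if k else G  # subsample() as sketched earlier\n", "    # keep only the connected components with at least 10 nodes\n", "    comps = [H.subgraph(c) for c in nx.connected_components(H) if len(c) >= 10]\n", "    return np.mean([nx.average_shortest_path_length(c) for c in comps])\n", "```" ] }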
, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Betweenness Centrality\n", "\n", "The importance of a node in a network depends on many factors. A website may be important due to its content, a router due to its capacity. Of course, all of these properties depend on the nature of the studied network, and may have very little to do with the graph structure of the network. We are particularly interested in the importance of a node (or a link) due to its topological function in the network. It is reasonable to assume that the topology of a network may dictate some intrinsic importance for different nodes. One measure of centrality can be the degree of a node: the higher the degree, the more the node is connected, and therefore the higher is its centrality in the network. However, the degree is not the only factor determining a node's importance.\n", "\n", "One of the most accepted definitions of centrality is based on counting paths going through a node. For each node, $i$, in the network, the number of “routing” paths to all other nodes (i.e., paths through which data flow) going through $i$ is counted, and this number determines the centrality of $i$. The most common selection is taking only the shortest paths as the routing paths. This leads to the following definition: the \\emph{betweenness centrality} of a node, $i$, equals the number of shortest paths between all pairs of nodes in the network going through it, i.e.,\n", "\n", "\\begin{equation} \n", "    g(i) = \\sum_{\\{ j,k \\}} g_i (j,k)\n", "\\end{equation}\n", "\n", "where the notation $\\{j, k\\}$ stands for summing each pair once, ignoring the order, and $g_i(j, k)$ equals $1$ if the shortest path between nodes $j$ and $k$ passes through node $i$ and $0$ otherwise. In fact, in networks with no weight (i.e., where all edges have the same length), there might be more than one shortest path. In that case, it is common to take $g_i(j, k) = C_i(j,k)/C(j,k)$, where $C(j,k)$ is the number of shortest paths between $j$ and $k$, and $C_i(j,k)$ is the number of those going through $i$. \\footnote{Several variations of this scheme exist, focusing, in particular, on how to count distinct shortest paths (if several shortest paths share some edges). These differences tend to have a very small statistical influence in random complex networks, where the number of short loops is small. Therefore, we will concentrate on the above definition. Another nuance is whether the source and destination are considered part of the shortest path.}\n", "\n", "The usefulness of the betweenness centrality in identifying bottlenecks and important nodes in the network has led to applications in identifying communities in biological and social networks.\n", "\n", "--- \n", "\n", "Let's see how to compute this centrality measure on our networks. The networkx library has a function that computes the betweenness centrality of all nodes in a network. It is based on the algorithm proposed in the paper\n", "\n", "_- Ulrik Brandes, A Faster Algorithm for Betweenness Centrality, Journal of Mathematical Sociology, 25(2):163-177, 2001._\n", "\n", "Even though this is a very fast algorithm, it's not fast enough to run in a reasonable time on large networks. Using the same idea as in the previous sections, we can take samplings of our original graph, obtaining approximate results. Unfortunately, I observed that even with heavy sampling, the time required to run the algorithm is still very high. To avoid using even heavier sampling (which would bias the results), I decided to use a different approach: parallelization!\n", "\n", "In the `utils` module I implemented a function called `betweenness_centrality_parallel`. The function takes as input\n",
"\n", "- `G: networkx graph object`: the graph on which we want to compute the betweenness centrality\n", "- `processes : int (optional)` The number of processes to use for computation. If `None` (default), processes is set to 1 and the standard betweenness algorithm is used.\n", "- `k: float (default=None)`: fraction of nodes to remove from the graph. If k is None, the betweenness centrality is computed using all the nodes of the graph.\n", "\n", "> **Memory Note:** Do not use more than 6 processes for big graphs, otherwise you may run out of memory; do it only if you have at least 32 GB of RAM. For small graphs, you can use more processes.\n", "\n", "The implemented function divides the network into chunks of nodes and computes each chunk's contribution to the betweenness centrality of the whole network. Each chunk is computed in parallel, and the results are summed up to obtain the final result. The function returns a dictionary with the betweenness centrality of each node. For more information, see the function code in the `utils` module and the sketch below.\n", "\n", "Depending on the machine and the time available, we can choose different values for `k`. Lower values will give us a more precise result, but will take longer to compute. On the other hand, higher values will give us a less precise result, but will be faster to compute. I suggest using `k=0.9` to test the function very quickly, and at most `k=0.6` to get a more precise result.\n", "\n", "> Since the checkins graphs are way bigger than the friendship graphs, I created two for loops to compute the betweenness centrality with different values of `k`.\n",
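 "\n", "The chunking idea is roughly the following (a hedged sketch modeled on the parallel-betweenness example from the networkx gallery; the actual `utils` implementation also handles the node sampling):\n", "\n", "```python\n", "from multiprocessing import Pool\n", "\n", "def betweenness_sketch(G, processes=2, n_chunks=8):\n", "    nodes = list(G.nodes())\n", "    chunks = [nodes[i::n_chunks] for i in range(n_chunks)]\n", "    with Pool(processes) as pool:\n", "        # each worker computes the contribution of its chunk of source nodes\n", "        parts = pool.starmap(nx.betweenness_centrality_subset,\n", "                             [(G, chunk, nodes) for chunk in chunks])\n", "    # contributions over a partition of the sources sum to the total\n", "    return {n: sum(p[n] for p in parts) for n in nodes}\n", "```"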
\n", "\n", "for graph in checkins_graphs:\n", " print(\"\\nComputing the approximate betweenness centrality for the {}...\".format(graph.name))\n", " start = time.time()\n", " betweenness_centrality = np.mean(list(betweenness_centrality_parallel(graph, 6, k = 0.3).values()))\n", " end = time.time()\n", " print(\"\\tBetweenness centrality: {} \".format(betweenness_centrality))\n", " print(\"\\tCPU time: \" + str(round(end-start,1)) + \" seconds\")\n", "\n", " analysis_results.loc[analysis_results['Graph'] == graph.name, 'betweenness centrality'] = betweenness_centrality\n", "\n", "for graph in friendships_graph:\n", " print(\"\\nComputing the approximate betweenness centrality for the {}...\".format(graph.name))\n", " start = time.time()\n", " betweenness_centrality = np.mean(list(betweenness_centrality_parallel(graph, 6, k = 0.1).values()))\n", " end = time.time()\n", " print(\"\\tBetweenness centrality: {} \".format(betweenness_centrality))\n", " print(\"\\tCPU time: \" + str(round(end-start,1)) + \" seconds\")\n", "\n", " analysis_results.loc[analysis_results['Graph'] == graph.name, 'betweenness centrality'] = betweenness_centrality\n", " \n", "analysis_results\n", "# save the results as pandas dataframe object\n", "analysis_results.to_pickle('analysis_results.pkl')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "acc_res = \"some urls\"\n", "\n", "# download the results with wget\n", "\n", "# open the dataframe object\n", "analysis_results = pd.read_pickle('analysis_results_acc.pkl')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Analysis of the results" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Distribution of Degree\n", "\n", "\n", "The Erdős-Rényi model has traditionally been the dominant subject of study in the field of random graphs. Recently, however, several studies of real-world networks have found that the ER model fails to reproduce many of their observed properties. One of the simplest properties of a network that can be measured directly is the degree distribution, or the fraction $P(k)$ of nodes having k connections (degree $k$). A well-known result for ER networks is that the degree distribution is Poissonian,\n", "\n", "\\begin{equation}\n", " P(k) = \\frac{e^{z} z^k}{k!}\n", "\\end{equation}\n", "\n", "Where $z = \\langle k \\rangle$. is the average degree. Direct measurements of the degree distribution for real networks show that the Poisson law does not apply. Rather, often these nets exhibit a scale-free degree distribution:\n", "\n", "\\begin{equation}\n", " P(k) = ck^{-\\gamma} \\quad \\text{for} \\quad k = m, ... , K\n", "\\end{equation}\n", "\n", "Where $c \\sim (\\gamma -1)m^{\\gamma - 1}$ is a normalization factor, and $m$ and $K$ are the lower and upper cutoffs for the degree of a node, respectively. The divergence of moments higher then $\\lceil \\gamma -1 \\rceil$ (as $K \\to \\infty$ when $N \\to \\infty$) is responsible for many of the anomalous properties attributed to scale-free networks. \n", "\n", "All real-world networks are finite and therefore all their moments are finite. The actual value of the cutoff K plays an important role. 
"\n", "---\n", "\n", "Let's see if our networks are scale-free or not. We can use the `degree_distribution` function from the `utils` module to plot the degree distribution of a graph. It takes a networkx graph object as input and returns a plot of the degree distribution. We expect to see a power-law distribution and not a Poissonian one." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for G in checkins_graphs:\n", "    degree_distribution(G)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for graph in friendships_graph:\n", "    degree_distribution(graph)" ] }
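, { "cell_type": "markdown", "metadata": {}, "source": [ "To make the visual impression quantitative, a rough estimate of the exponent $\\gamma$ can be obtained with a least-squares fit on the log-log degree histogram (a crude sketch; a rigorous fit would use maximum likelihood, e.g. via the `powerlaw` package):\n", "\n", "```python\n", "def rough_gamma(G):\n", "    counts = Counter(d for _, d in G.degree() if d > 0)\n", "    ks = np.array(sorted(counts))\n", "    pk = np.array([counts[k] for k in ks]) / G.number_of_nodes()\n", "    slope, _ = np.polyfit(np.log(ks), np.log(pk), 1)\n", "    return -slope  # gamma, if P(k) ~ k**(-gamma)\n", "```" ] }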
, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can clearly see from the graphs obtained, the degree distribution of the networks is not Poissonian, but rather scale-free. This is a good indication that the networks are not random, but rather small-world.\n", "\n", "Let's try to plot the degree distribution of a random Erdos-Renyi graph with the same number of nodes and a probability of edge creation equal to the number of edges of the network divided by the number of possible edges (i.e., the density of the network). We expect to see a Poissonian distribution.\n", "\n", "> This is a time saving approach, NOT a rigorous one. If we want to be rigorous, we should follow the algorithm proposed by Maslov and Sneppen, implemented in the networkx function `random_reference`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# for each network, create an erdos-renyi model with the same number of nodes and the same density as the original. If you want to test it with the watts-strogatz model, uncomment the lines below and comment the first 2 lines of the for loop\n", "\n", "for graph in checkins_graphs:\n", "\n", "    G = nx.erdos_renyi_graph(graph.number_of_nodes(), nx.density(graph))\n", "    G.name = graph.name + \" Erdos-Renyi\"\n", "\n", "    # p = nx.density(graph)  # rewiring probability\n", "    # avg_degree = int(np.mean([d for n, d in graph.degree()]))\n", "    # G = nx.watts_strogatz_graph(graph.number_of_nodes(), avg_degree, p)\n", "    # G.name = graph.name + \" Watts-Strogatz\"\n", "\n", "    print(G.name)\n", "    print(\"Number of nodes: \", G.number_of_nodes())\n", "    print(\"Number of edges: \", G.number_of_edges())\n", "    degree_distribution(G, log=False)\n", "    " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# for each network, create an erdos-renyi model with the same number of nodes and the same density as the original graph. If you want to test it with the watts-strogatz model, uncomment the lines below and comment the first 2 lines of the for loop\n", "\n", "for graph in friendships_graph:\n", "\n", "    G = nx.erdos_renyi_graph(graph.number_of_nodes(), nx.density(graph))\n", "    G.name = graph.name + \" Erdos-Renyi\"\n", "\n", "    # p = nx.density(graph)  # rewiring probability\n", "    # avg_degree = int(np.mean([d for n, d in graph.degree()]))\n", "    # G = nx.watts_strogatz_graph(graph.number_of_nodes(), avg_degree, p)\n", "    # G.name = graph.name + \" Watts-Strogatz\"\n", "\n", "    print(G.name)\n", "    print(\"Number of nodes: \", G.number_of_nodes())\n", "    print(\"Number of edges: \", G.number_of_edges())\n", "    degree_distribution(G, log=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is a Poissonian distribution, as expected.\n", "\n", "The degree distribution alone is not enough to characterize the network. There are many other quantities, such as the degree-degree correlation (between connected nodes), the spatial correlations, the clustering coefficient, the betweenness or centrality distribution, and the self-similarity exponents." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## The Small-World Model\n", "\n", "It should be clarified that real networks are not random. Their formation and development are dictated by a combination of many different processes and influences. These influencing conditions include natural limitations and processes, human considerations such as optimal performance and robustness, economic considerations, natural selection and many others. Controversies still exist regarding the measure to which random models represent real-world networks. However, in this section we will focus on random network models and attempt to show whether their properties may still be used to study properties of our real-world networks. \n", "\n", "Many real-world networks have many properties that cannot be explained by the ER model. One such property is the high clustering observed in many real-world networks. This led Watts and Strogatz to develop an alternative model, called the “small-world” model, whose networks are, quoting their paper:\n", "\n", "> \"highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs\"\n", "\n", "Their idea was to begin with an ordered lattice, such as the $k$-ring (a ring where each site is connected to its $2k$ nearest neighbors - $k$ from each side) or the two-dimensional lattice. For each site, each of the links emanating from it is removed with probability $\\varphi$ and is rewired to a randomly selected site in the network. In other words, small-world networks have the unique ability to have specialized nodes or regions within a network while simultaneously exhibiting shared or distributed processing across all of the communicating nodes within a network." ] }
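, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "The effect of the rewiring probability $\\varphi$ can be seen directly on a toy graph (a minimal sketch, not part of the analysis): a little rewiring collapses the path length while barely reducing the clustering.\n", "\n", "```python\n", "for phi in (0.0, 0.01, 0.1, 1.0):\n", "    W = nx.connected_watts_strogatz_graph(1000, 10, phi, seed=42)\n", "    print(phi, nx.average_clustering(W),\n", "          nx.average_shortest_path_length(W))\n", "```" ] }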
, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Small-Worldness" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Given the unique processing or information transfer capabilities of small-world networks, it is vital to determine whether this is a universal property of naturally occurring networks or whether small-world properties are restricted to specialized networks. An overly liberal definition of small-worldness might miss the specific benefits of these networks\n", "\n", "> high clustering and low path length\n", "\n", "and obscure them with networks more closely associated with regular lattices and random networks. A possible definition of a small-world network is that it has clustering similar to a regular lattice and path length similar to a random network. However, in practice, networks are typically defined as small-world by comparing clustering and path length to those of a comparable random network _(Humphries et al., 2006)_. Unfortunately, this means that networks with very low clustering can be, and indeed are, defined as small-world. We need a method that is able to distinguish true small-world networks from those that are more closely aligned with random or lattice structures, without overestimating the occurrence of small-world networks. Networks that are more similar to random or lattice structures are interesting in their own right, but they do not behave like small-world networks.\n", "\n", "## Identifying small-world networks\n", "\n", "Small-world networks are distinguished from other networks by two specific properties, the first being high clustering ($C$) among nodes. High clustering supports specialization, as local collections of strongly interconnected nodes readily share information or resources. Conceptually, clustering is quite straightforward to comprehend. In a real-world analogy, clustering represents the probability that one’s friends are also friends of each other. Small-world networks also have short path lengths ($L$), as is commonly observed in random networks. Path length is a measure of the distance between nodes in the network, calculated as the mean of the shortest geodesic distances between all possible node pairs. Small values of $L$ ensure that information or resources spread easily throughout the network. This property makes distributed information processing possible on technological networks and supports the six degrees of separation often reported in social networks.\n", "\n", "Watts and Strogatz developed a network model (WS model) that resulted in the first-ever networks with clustering close to that of a lattice and path lengths similar to those of random networks. The WS model demonstrates that random rewiring of a small percentage of the edges in a lattice results in a precipitous decrease in the path length, but only trivial reductions in the clustering. Across this rewiring probability, there is a range where the discrepancy between clustering and path length is very large, and it is in this area that the benefits of small-world networks are realized.\n", "\n", "### A first approach: the $\\sigma$ coefficient\n", "\n", "In 2006, Humphries and colleagues introduced a quantitative metric, the small-world coefficient $\\sigma$, that uses a ratio of network clustering and path length compared to its random network equivalent. In this quantitative approach, $C$ and $L$ are measured against those of their equivalent derived random networks ($C_{rand}$ and $L_{rand}$, respectively) to generate the ratios $\\gamma = C/C_{rand}$ and $\\lambda = L/L_{rand}$. These ratios are then used to calculate the small-world coefficient as:\n", "$$ \\sigma = \\frac{C/C_{rand}}{L/L_{rand}} = \\frac{\\gamma}{\\lambda} $$\n", "The conditions that must be met for a network to be classified as small-world are $C \\gg C_{rand}$ and $L \\approx L_{rand}$, which results in $\\sigma > 1$. As it turns out, a major issue with $\\sigma$ is that the clustering coefficient of the equivalent random network greatly influences the small-world coefficient. In the small-world coefficient equation, $\\sigma$ uses the relationship between $C$ and $C_{rand}$ to determine the value of $\\gamma$. Because clustering in a random network is typically extremely low (Humphries and Gurney, 2008; Watts and Strogatz, 1998), the value of $\\gamma$ can be unduly influenced by only small changes in $C_{rand}$. \n",
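 "\n", "networkx ships an implementation of this coefficient, `nx.sigma`, which builds the random equivalents internally (it is expensive, so the quick check below is on a toy graph, as a sketch only):\n", "\n", "```python\n", "W = nx.connected_watts_strogatz_graph(300, 10, 0.1, seed=42)\n", "print(nx.sigma(W, niter=20, nrand=5, seed=42))  # sigma > 1 suggests small-world\n", "```\n",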
"\n", "### A more solid approach: the $\\omega$ coefficient\n", "\n", "Given a graph with characteristic path length, $L$, and clustering, $C$, the small-world measurement, $\\omega$, is defined by comparing the clustering of the network to that of an equivalent lattice network, $C_{latt}$, and comparing the path length to that of an equivalent random network, $L_{rand}$; the relationship is simply the difference of the two ratios, defined as:\n", "$$ \\omega = \\frac{L_{rand}}{L} - \\frac{C}{C_{latt}} $$\n", "In using the clustering of an equivalent lattice network rather than a random network, this metric is less susceptible to the fluctuations seen with $C_{rand}$. Moreover, values of $\\omega$ are restricted to the interval $-1$ to $1$ regardless of network size. Values close to zero are considered small-world.\n", "\n", "Positive values indicate a graph with more random characteristics, while negative values indicate a graph with more regular, or lattice-like, characteristics.\n",
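 "\n", "networkx also implements this measure as `nx.omega`, again generating random and lattice equivalents internally (expensive on large graphs, hence the toy-graph sketch):\n", "\n", "```python\n", "W = nx.connected_watts_strogatz_graph(300, 10, 0.1, seed=42)\n", "print(nx.omega(W, niter=5, nrand=5, seed=42))  # omega close to 0 suggests small-world\n", "```\n",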
"\n", "#### Lattice network construction\n", "\n", "In the paper [1] the lattice network was generated by using a modified version of the “latticization” algorithm (Sporns and Zwi, 2004) found in the brain connectivity toolbox (Rubinov and Sporns, 2010). The procedure is based on a Markov-chain algorithm that maintains node degree and swaps edges with uniform probability; however, swaps are carried out only if the resulting matrix has entries that are closer to the main diagonal. To optimize the clustering coefficient of the lattice network, the latticization procedure is performed over several user-defined repetitions. Storing the initial adjacency matrix and its clustering coefficient, the latticization procedure is performed on the matrix. If the clustering coefficient of the resulting matrix is lower, the initial matrix is kept and latticization is performed again on the same matrix; if the clustering coefficient is higher, then the initial adjacency matrix is replaced. This latticization process is repeated until clustering is maximized. It results in a highly clustered network with a long path length, approximating a lattice topology. To decrease the processing time in larger networks, a “sliding window” procedure was developed: smaller sections of the matrix are sampled along the main diagonal, latticized, and reinserted into the larger matrix in a step-wise fashion.\n", "\n", "#### Limitations\n", "\n", "The first limitation is the length of time it takes to generate lattice networks, particularly for large ones. Although latticization is fast in smaller networks, large networks such as functional brain networks and the Internet can take several hours to generate and optimize. The latticization procedure described here uses an algorithm developed by Sporns and Zwi in 2004, but the algorithm was used on much smaller datasets. \n", "\n", "Furthermore, $\\omega$ is limited by networks that have very low clustering that cannot be appreciably increased, such as networks with “super hubs” or hierarchical networks. In hierarchical networks, the nodes are often configured in branches that contain little to no clustering. In networks with “super hubs”, the network may contain a hub with a degree that is several orders of magnitude greater than that of the next most connected hub. In both these networks, there are fewer configurations available to increase the clustering of the network. Moreover, in a targeted assault of these networks, the topology is easily destroyed (Albert et al., 2000). Such vulnerability to attack signifies a network that may not be small-world." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.10.8 64-bit", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.9" }, "orig_nbformat": 4, "vscode": { "interpreter": { "hash": "e7370f93d1d0cde622a1f8e1c04877d8463912d04d973331ad4851f04de6915a" } } }, "nbformat": 4, "nbformat_minor": 2 }