package lib
Type Members
- class AggregateMessages extends Arguments with Serializable with WithIntermediateStorageLevel with Logging
This is a primitive for implementing graph algorithms. It aggregates messages from the neighboring edges and vertices of each vertex.
For each triplet (source vertex, edge, destination vertex) in GraphFrame.triplets, this can send a message to the source and/or destination vertices.
- AggregateMessages.sendToSrc() sends a message to the source vertex of each triplet.
- AggregateMessages.sendToDst() sends a message to the destination vertex of each triplet.
- AggregateMessages.agg specifies an aggregation function for aggregating the messages sent to each vertex. It also runs the aggregation, computing a DataFrame with one row for each vertex which receives > 0 messages. The DataFrame has 2 columns:
- vertex ID column (named GraphFrame.ID)
- aggregate from messages sent to vertex (with the name given to the Column specified in AggregateMessages.agg())
When specifying the messages and aggregation function, the user may reference columns using:
- AggregateMessages.src: column for source vertex of edge
- AggregateMessages.edge: column for edge
- AggregateMessages.dst: column for destination vertex of edge
- AggregateMessages.msg: message sent to vertex (for aggregation function)
Note: If you use this operation to write an iterative algorithm, you may want to use checkpoint() (or localCheckpoint()) as a workaround for caching issues.
We can use this function to compute the in-degree of each vertex:
Example:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{lit, sum}
import org.graphframes.GraphFrame
import org.graphframes.lib.AggregateMessages

val g: GraphFrame = ... // load the graph (the original example loaded a Twitter follower graph)
val inDeg: DataFrame = g.aggregateMessages
  .sendToDst(lit(1))
  .agg(sum(AggregateMessages.msg).as("inDegree"))

- class BFS extends Arguments with Serializable
Breadth-first search (BFS)
This method returns a DataFrame of valid shortest paths from vertices matching fromExpr to vertices matching toExpr. If multiple paths are valid and have the same length, the DataFrame will return one Row for each path. If no paths are valid, the DataFrame will be empty.
Note: "Shortest" means globally shortest path. I.e., if the shortest path between two vertices matching fromExpr and toExpr is length 5 (edges) but no path is shorter than 5, then all paths returned by BFS will have length 5.
The returned DataFrame will have the following columns:
- from: start vertex of path
- e[i]: edge i in the path, indexed from 0
- v[i]: intermediate vertex i in the path, indexed from 1
- to: end vertex of path
Each of these columns is a StructType whose fields are the same as the columns of GraphFrame.vertices or GraphFrame.edges.
For example, suppose we have a graph g. Say the vertices DataFrame of g has columns "id" and "job", and the edges DataFrame of g has columns "src", "dst", and "relation".
// Search from vertex "Joe" to find the closest vertices with attribute job = CEO.
g.bfs.fromExpr(col("id") === "Joe").toExpr(col("job") === "CEO").run()
If we found a path of 3 edges, each row would have columns:
from | e0 | v1 | e1 | v2 | e2 | to
In the above row, each vertex column (from, v1, v2, to) would have fields "id" and "job" (just like g.vertices). Each edge column (e0, e1, e2) would have fields "src", "dst", and "relation".
If there are ties, then each of the equal paths will be returned as a separate Row.
If one or more vertices match both the from and to conditions, then there is a 0-hop path. The returned DataFrame will have the "from" and "to" columns (as above); however, the "from" and "to" columns will be exactly the same. There will be one row for each vertex in GraphFrame.vertices matching both fromExpr and toExpr.
Parameters:
- fromExpr: Spark SQL expression specifying valid starting vertices for the BFS. This condition will be matched against each vertex's id or attributes. To start from a specific vertex, this could be "id = [start vertex id]". To start from multiple valid vertices, this can operate on vertex attributes.
- toExpr: Spark SQL expression specifying valid target vertices for the BFS. This condition will be matched against each vertex's id or attributes.
- maxPathLength: limit on the length of paths. If no valid paths of length <= maxPathLength are found, then the BFS is terminated. (default = 10)
- edgeFilter: Spark SQL expression specifying edges which may be used in the search. This allows the user to disallow crossing certain edges. Such filters can be applied post-hoc after BFS, but specifying the filter here is more efficient.
Returns:
- DataFrame of valid shortest paths found in the BFS
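For instance, a search can restrict which edges may be crossed and cap the path length. A minimal sketch, assuming a GraphFrame g whose vertices have "name" and "age" columns and whose edges have a "relationship" column:

val paths: DataFrame = g.bfs
  .fromExpr("name = 'Esther'")
  .toExpr("age < 32")
  .edgeFilter("relationship != 'friend'") // disallow crossing 'friend' edges
  .maxPathLength(3) // stop if no valid path of length <= 3 exists
  .run()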
- class ConnectedComponents extends Arguments with Logging with WithAlgorithmChoice with WithCheckpointInterval with WithBroadcastThreshold with WithIntermediateStorageLevel with WithUseLabelsAsComponents with WithMaxIter with WithLocalCheckpoints
Connected Components algorithm.
Computes the connected component membership of each vertex and returns a DataFrame of vertex information with each vertex assigned a component ID.
The resulting DataFrame contains all the vertex information and one additional column:
- component (LongType): unique ID for this component
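A minimal usage sketch for a GraphFrame g; the default implementation requires a Spark checkpoint directory to be set, and the path below is illustrative:

spark.sparkContext.setCheckpointDir("/tmp/graphframes-checkpoints")
val components: DataFrame = g.connectedComponents.run()
components.select("id", "component").show()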
- class DetectingCycles extends Arguments with Serializable with Logging with WithIntermediateStorageLevel with WithLocalCheckpoints with WithCheckpointInterval
- class KCore extends Serializable with WithIntermediateStorageLevel with WithCheckpointInterval with WithLocalCheckpoints
K-Core decomposition algorithm implementation for GraphFrames.
This object provides the run method to compute the k-core decomposition of a graph, which assigns each vertex the maximum k such that the vertex is part of a k-core. A k-core is a maximal connected subgraph in which every vertex has degree at least k.
The algorithm is based on the distributed k-core decomposition approach described in:
Mandal, Aritra, and Mohammad Al Hasan. "A distributed k-core decomposition algorithm on spark." 2017 IEEE International Conference on Big Data (Big Data). IEEE, 2017.
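To make the definition concrete, here is a single-machine peeling sketch of core numbers. It illustrates only the k-core definition above, not the distributed algorithm from the paper:

// Peel vertices level by level: a vertex removed while k is the current
// level has core number k (illustrative local code, not the Spark implementation).
def coreNumbers(edges: Seq[(Int, Int)]): Map[Int, Int] = {
  var adj: Map[Int, Set[Int]] = edges
    .flatMap { case (a, b) => Seq(a -> b, b -> a) }
    .groupBy(_._1)
    .map { case (v, ns) => v -> ns.map(_._2).toSet }
  var core = Map.empty[Int, Int]
  var k = 0
  while (adj.nonEmpty) {
    val peel = adj.collect { case (v, ns) if ns.size <= k => v }.toSet
    if (peel.isEmpty) k += 1 // nothing left of degree <= k: move to the next level
    else {
      core ++= peel.map(v => v -> k)
      adj = (adj -- peel).map { case (v, ns) => v -> (ns -- peel) }
    }
  }
  core
}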
- class LabelPropagation extends Arguments with WithAlgorithmChoice with WithCheckpointInterval with WithMaxIter with WithLocalCheckpoints with WithIntermediateStorageLevel with Logging
Run static Label Propagation for detecting communities in networks.
Each node in the network is initially assigned to its own community. At every iteration, nodes send their community affiliation to all neighbors and update their state to the mode community affiliation of incoming messages.
LPA is a standard community detection algorithm for graphs. It is very inexpensive computationally, although (1) convergence is not guaranteed and (2) one can end up with trivial solutions (all nodes are identified into a single community).
The resulting DataFrame contains all the original vertex information and one additional column:
- label (LongType): label of community affiliation
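Typical usage for a GraphFrame g fixes the number of iterations up front:

val communities = g.labelPropagation.maxIter(5).run()
communities.select("id", "label").show()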
- class MaximalIndependentSet extends Serializable with WithIntermediateStorageLevel with WithCheckpointInterval with WithLocalCheckpoints
This class implements a distributed algorithm for finding a Maximal Independent Set (MIS) in a graph.
An MIS is a set of vertices such that no two vertices in the set are adjacent (i.e., there is no edge between any two vertices in the set), and the set is maximal, meaning that adding any other vertex to the set would violate the independence property. Note that this implementation finds a maximal (but not necessarily maximum) independent set; that is, it ensures no more vertices can be added to the set, but does not guarantee that the set has the largest possible number of vertices among all possible independent sets in the graph.
The algorithm implemented here is based on the paper: Ghaffari, Mohsen. "An improved distributed algorithm for maximal independent set." Proceedings of the twenty-seventh annual ACM-SIAM symposium on Discrete algorithms. Society for Industrial and Applied Mathematics, 2016.
Note: This is a randomized, non-deterministic algorithm. The result may vary between runs even if a fixed random seed is provided, because of how Apache Spark works.
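To illustrate the independence and maximality properties, here is a greedy single-machine sketch (not the distributed randomized algorithm from the paper); the result is maximal but generally not maximum:

// Scan vertices in order, adding a vertex whenever none of its neighbors
// has been chosen yet. Every unchosen vertex ends up with a chosen neighbor.
def greedyMIS(vertices: Seq[Int], edges: Seq[(Int, Int)]): Set[Int] = {
  val adj = edges
    .flatMap { case (a, b) => Seq(a -> b, b -> a) }
    .groupBy(_._1)
    .map { case (v, ns) => v -> ns.map(_._2).toSet }
  var mis = Set.empty[Int]
  for (v <- vertices if !adj.getOrElse(v, Set.empty[Int]).exists(mis.contains)) mis += v
  mis
}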
- class PageRank extends Arguments with Logging
PageRank algorithm implementation. There are two implementations of PageRank.
The first one uses the org.apache.spark.graphx.Graph interface with aggregateMessages and runs PageRank for a fixed number of iterations. This can be executed by setting maxIter. Conceptually, the algorithm does the following:

var PR = Array.fill(n)( 1.0 )
val oldPR = Array.fill(n)( 1.0 )
for( iter <- 0 until maxIter ) {
  swap(oldPR, PR)
  for( i <- 0 until n ) {
    PR[i] = alpha + (1 - alpha) * inNbrs[i].map(j => oldPR[j] / outDeg[j]).sum
  }
}
The second implementation uses the org.apache.spark.graphx.Pregel interface and runs PageRank until convergence; this can be run by setting tol. Conceptually, the algorithm does the following:

var PR = Array.fill(n)( 1.0 )
val oldPR = Array.fill(n)( 0.0 )
while( max(abs(PR - oldPR)) > tol ) {
  swap(oldPR, PR)
  for( i <- 0 until n if abs(PR[i] - oldPR[i]) > tol ) {
    PR[i] = alpha + (1 - alpha) * inNbrs[i].map(j => oldPR[j] / outDeg[j]).sum
  }
}
alpha is the random reset probability (typically 0.15), inNbrs[i] is the set of neighbors which link to i, and outDeg[j] is the out-degree of vertex j.
Note that this is not the "normalized" PageRank; as a consequence, pages that have no inlinks will have a PageRank of alpha. In particular, the pageranks may have some values greater than 1.
The resulting vertices DataFrame contains one additional column:
- pagerank (DoubleType): the pagerank of this vertex
The resulting edges DataFrame contains one additional column:
- weight (DoubleType): the normalized weight of this edge after running PageRank
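Both variants are reached through the same builder on a GraphFrame g: set maxIter for the fixed-iteration version or tol for the convergence-based one.

// Fixed number of iterations:
val fixedResults = g.pageRank.resetProbability(0.15).maxIter(10).run()
// Run until convergence to the given tolerance:
val tolResults = g.pageRank.resetProbability(0.15).tol(0.01).run()
fixedResults.vertices.select("id", "pagerank").show()
fixedResults.edges.select("src", "dst", "weight").show()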
- class ParallelPersonalizedPageRank extends Arguments with WithMaxIter with Logging
Parallel Personalized PageRank algorithm implementation.
This implementation uses the standalone GraphFrame interface and runs personalized PageRank in parallel for a fixed number of iterations. This can be run by setting maxIter. The source vertex IDs are set in sourceIds. A simple local implementation of this algorithm is as follows.

var oldPR = Array.fill(n)( 1.0 )
val PR = (0 until n).map(i => if (sourceIds.contains(i)) alpha else 0.0)
for( iter <- 0 until maxIter ) {
  swap(oldPR, PR)
  for( i <- 0 until n ) {
    PR[i] = (1 - alpha) * inNbrs[i].map(j => oldPR[j] / outDeg[j]).sum
    if (sourceIds.contains(i)) PR[i] += alpha
  }
}
alpha is the random reset probability (typically 0.15), inNbrs[i] is the set of neighbors which link to i, and outDeg[j] is the out-degree of vertex j.
Note that this is not the "normalized" PageRank; as a consequence, pages that have no inlinks will have a PageRank of alpha. In particular, the pageranks may have some values greater than 1.
The resulting vertices DataFrame contains one additional column:
- pageranks (VectorType): the pageranks of this vertex from all input source vertices
The resulting edges DataFrame contains one additional column:
- weight (DoubleType): the normalized weight of this edge after running PageRank
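For example, to run ten iterations for a GraphFrame g personalized to three source vertices (the IDs below are illustrative):

val results = g.parallelPersonalizedPageRank
  .resetProbability(0.15)
  .maxIter(10)
  .sourceIds(Array[Any]("a", "b", "c"))
  .run()
results.vertices.select("id", "pageranks").show()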
- class Pregel extends Logging with WithLocalCheckpoints with WithIntermediateStorageLevel
Implements a Pregel-like bulk-synchronous message-passing API based on DataFrame operations.
See Malewicz et al., Pregel: a system for large-scale graph processing for a detailed description of the Pregel algorithm.
You can construct a Pregel instance using either this constructor or org.graphframes.GraphFrame#pregel, then use the builder pattern to describe the operations, and then call run to start a run. It returns a DataFrame of vertices from the last iteration.
When a run starts, it expands the vertices DataFrame using column expressions defined by withVertexColumn. Those additional vertex properties can be changed during Pregel iterations. In each Pregel iteration, there are three phases:
- Given each edge triplet, generate messages and specify target vertices to send, described by sendMsgToDst and sendMsgToSrc.
- Aggregate messages by target vertex IDs, described by aggMsgs.
- Update additional vertex properties based on aggregated messages and states from previous iteration, described by withVertexColumn.
See the method API docs for the columns you can reference in each phase.
You can control the number of iterations with setMaxIter; see the API docs for advanced controls.
Example code for Page Rank:
val edges = ...
val vertices = GraphFrame.fromEdges(edges).outDegrees.cache()
val numVertices = vertices.count()
val graph = GraphFrame(vertices, edges)
val alpha = 0.15
val ranks = graph.pregel
  .withVertexColumn("rank", lit(1.0 / numVertices),
    coalesce(Pregel.msg, lit(0.0)) * (1.0 - alpha) + alpha / numVertices)
  .sendMsgToDst(Pregel.src("rank") / Pregel.src("outDegree"))
  .aggMsgs(sum(Pregel.msg))
  .run()
- class SVDPlusPlus extends Arguments with WithMaxIter with Logging
Implement SVD++ based on "Factorization Meets the Neighborhood: a Multifaceted Collaborative Filtering Model", available at https://dl.acm.org/citation.cfm?id=1401944.
Note: The status of this algorithm is EXPERIMENTAL. Its API and implementation may be changed in the future.
The prediction rule is rui = u + bu + bi + qi * (pu + |N(u)|^(-0.5) * sum(y)). See the details on page 6 of the article.
Configuration parameters: see the description of each parameter in the article.
Returns a DataFrame with vertex attributes containing the trained model. See the object (static) members for the names of the output columns.
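A minimal sketch, assuming the algorithm is exposed on a GraphFrame g as svdPlusPlus like the other algorithms in this package (maxIter comes from the WithMaxIter trait above):

val model = g.svdPlusPlus.maxIter(2).run() // DataFrame of vertex attributes holding the trained model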
- class ShortestPaths extends Arguments with WithAlgorithmChoice with WithCheckpointInterval with WithLocalCheckpoints with WithIntermediateStorageLevel with WithDirection
Computes shortest paths from every vertex to the given set of landmark vertices. Note that this takes edge direction into account.
The returned DataFrame contains all the original vertex information as well as one additional column:
- distances (MapType[vertex ID type, IntegerType]): for each vertex v, a map containing the shortest-path distance to each reachable landmark vertex.
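For example, for a GraphFrame g with landmark vertices "a" and "d" (illustrative IDs):

val results = g.shortestPaths.landmarks(Seq("a", "d")).run()
results.select("id", "distances").show()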
- class StronglyConnectedComponents extends Arguments with WithMaxIter with Logging
Compute the strongly connected component (SCC) of each vertex and return a DataFrame with each vertex assigned to the SCC containing that vertex.
The resulting DataFrame contains all the original vertex information and one additional column:
- component (LongType): unique ID for this component
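Usage for a GraphFrame g mirrors the other iterative algorithms; the iteration count is set via maxIter:

val sccs = g.stronglyConnectedComponents.maxIter(10).run()
sccs.select("id", "component").show()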
- class TriangleCount extends Arguments with Serializable with WithIntermediateStorageLevel
Computes the number of triangles passing through each vertex.
This algorithm ignores edge direction; i.e., all edges are treated as undirected. In a multigraph, duplicate edges will be counted only once.
**WARNING** This implementation is based on intersections of neighbor sets, which requires collecting both SRC and DST neighbors per edge! This can blow up memory if the graph contains very high-degree nodes (power-law networks). Consider sampling strategies in that case!
The returned DataFrame contains all the original vertex information and one additional column:
- count (LongType): the count of triangles
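Basic usage for a GraphFrame g:

val triangles = g.triangleCount.run()
triangles.select("id", "count").show()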
Value Members
- object AggregateMessages extends Logging with Serializable
- object ConnectedComponents extends Logging
- object DetectingCycles extends Serializable
- object KCore extends Serializable with Logging
- object MaximalIndependentSet extends Serializable with Logging
- object Pregel extends Serializable
Constants and utilities for the Pregel algorithm.
- object SVDPlusPlus