In this sprint, you will learn about graphs. These data structures are part of the critical computer science fundamentals that interviewers expect you to be comfortable with during the technical interview process.
Additionally, we will teach you about the technical interview process, best practices for interviews, getting unstuck, and overcoming imposter syndrome during an interview. As you complete this sprint and the computer science curriculum, you should feel prepared to tackle the technical interview process successfully.
In this module, you will learn about graph properties, graph representations, and how to build Graph and Vertex classes to construct your graphs in Python.
In this module, you will learn about breadth-first search, depth-first search, and implementing both of these techniques in code.
Depth-first search (DFS) is an algorithm for searching a graph or tree data structure. The algorithm starts at the root (top) node of a tree and goes as far as it can down a given branch (path), then backtracks until it finds an unexplored path, and then explores it. The algorithm does this until the entire graph has been explored. Many problems in computer science can be thought of in terms of graphs. For example, analyzing networks, mapping routes, scheduling, and finding spanning trees are graph problems. To analyze these problems, graph-search algorithms like depth-first search are useful.
Depth-first searches are often used as subroutines in other, more complex algorithms. For example, the Hopcroft–Karp matching algorithm uses a DFS as part of its algorithm to help find a matching in a graph. DFS is also used in tree-traversal algorithms, also known as tree searches, which have applications in the traveling-salesman problem and the Ford-Fulkerson algorithm.
How do you solve a maze?
Depth-first search is a common way that many people naturally approach solving problems like mazes. First, we select a path in the maze (for the sake of the example, let's choose a path according to some rule we lay out ahead of time) and we follow it until we hit a dead end or reach the finishing point of the maze. If a given path doesn’t work, we backtrack and take an alternative path from a past junction, and try that path. Below is an animation of a DFS approach to solving this maze.
DFS is a great way to solve mazes and other puzzles that have a single solution.
The main strategy of depth-first search is to explore deeper into the graph whenever possible. Depth-first search explores edges that come out of the most recently discovered vertex, s. Only edges going to unexplored vertices are explored. When all of s's edges have been explored, the search backtracks until it reaches an unexplored neighbor. This process continues until all of the vertices that are reachable from the original source vertex are discovered. If there are any unvisited vertices, depth-first search selects one of them as a new source and repeats the search from that vertex. The algorithm repeats this entire process until it has discovered every vertex. This algorithm is careful not to repeat vertices, so each vertex is explored once. DFS uses a stack data structure to keep track of vertices.
Here are the basic steps for performing a depth-first search:
Visit a vertex s.
Mark s as visited.
Recursively visit each unvisited vertex attached to s.
This animation illustrates the depth-first search algorithm:
Note: This animation does not show the marking of a node as "visited," which would more clearly illustrate the backtracking step.
Fill out the following graph by labeling each node 1 through 12 according to the order in which the depth-first search would visit the nodes:
Below are examples of pseudocode and Python code implementing DFS both recursively and non-recursively. The non-recursive version uses a stack to keep track of nodes still to be visited: the last node pushed is the next one visited, and the rest are stored to be visited later.
Pseudocode[1]
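The original pseudocode isn't reproduced here, so below is one common way to express the recursive procedure described above (a sketch, not necessarily the exact pseudocode the citation refers to):

```
DFS(G):
    for each vertex v in G:
        mark v as unvisited
    for each vertex v in G:
        if v is unvisited:
            DFS-Visit(v)

DFS-Visit(v):
    mark v as visited
    for each neighbor u of v:
        if u is unvisited:
            DFS-Visit(u)
```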
Python Implementation without Recursion
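Since the implementation itself isn't reproduced here, below is one possible sketch of an iterative DFS. The graph representation (a dictionary mapping each vertex to a list of neighbors) and the example graph are assumptions for illustration:

```python
# Iterative DFS using an explicit stack. The graph is assumed to be a
# dictionary mapping each vertex to a list of its neighbors.
def dfs_iterative(graph, start):
    visited = set()
    order = []             # vertices in the order we visit them
    stack = [start]
    while stack:
        vertex = stack.pop()   # take the most recently discovered vertex
        if vertex not in visited:
            visited.add(vertex)
            order.append(vertex)
            # Push unexplored neighbors; they will be visited later.
            for neighbor in graph[vertex]:
                if neighbor not in visited:
                    stack.append(neighbor)
    return order

example_graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}
print(dfs_iterative(example_graph, "A"))  # ['A', 'C', 'D', 'B']
```

Note that the stack causes the most recently pushed neighbor to be explored first, which is why "C" is visited before "B" here.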
DFS can also be implemented using recursion, which greatly reduces the number of lines of code.
Python Implementation Using Recursion
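A possible recursive sketch (again assuming a dictionary-of-neighbors graph representation); the call stack replaces the explicit stack:

```python
def dfs_recursive(graph, vertex, visited=None):
    if visited is None:
        visited = set()
    visited.add(vertex)
    # Recursively visit each unvisited neighbor.
    for neighbor in graph[vertex]:
        if neighbor not in visited:
            dfs_recursive(graph, neighbor, visited)
    return visited

example_graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(sorted(dfs_recursive(example_graph, "A")))  # ['A', 'B', 'C', 'D']
```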
It is common to modify the algorithm in order to keep track of the edges instead of the vertices, as each edge describes the nodes at each end. This is useful when one is attempting to reconstruct the traversed tree after processing each node. In case of a forest or a group of trees, this algorithm can be expanded to include an outer loop that iterates over all trees in order to process every single node.
There are three different orders in which a DFS can visit the nodes of a tree: pre-order, in-order, and post-order (in-order is defined for binary trees).
Pre-order DFS works by visiting the current node and successively moving to the left until a leaf is reached, visiting each node on the way there. Once there are no more children on the left of a node, the children on the right are visited. This is the most standard DFS algorithm.
Instead of visiting each node as it traverses down a tree, an in-order algorithm finds the leftmost node in the tree, visits that node, and subsequently visits the parent of that node. It then goes to the child on the right and finds the next leftmost node in the tree to visit.
A post-order strategy works by visiting the leftmost leaf in the tree, then going up to the parent and down the second leftmost leaf in the same branch, and so on until the parent is the last node to be visited within a branch. This type of algorithm prioritizes the processing of leaves before roots in case a goal lies at the end of a tree.
Depth-first search visits every vertex once and checks every edge in the graph once. Therefore, DFS complexity is O(V + E). This assumes that the graph is represented as an adjacency list.
DFS vs BFS
Breadth-first search is less space-efficient than depth-first search because BFS keeps a queue of the entire frontier, while DFS maintains only a few pointers at each level.
If it is known that an answer will likely be found deep in a tree, DFS is a better option than BFS. BFS is good to use when the depth of the tree can vary, or when a single answer, such as the shortest path, is needed. If the entire tree should be traversed, DFS is often a better option.
BFS always returns a shortest path in an unweighted graph, but this is not guaranteed for DFS.
Here is an example that compares the order that the graph is searched in when using a BFS and then a DFS (by each of the three approaches).[2]
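The graph from the cited example isn't reproduced here, but the orders listed below can be reproduced with a small binary tree: node 1 at the root, nodes 2 and 3 as its children, and nodes 4 and 5 as children of node 2. A sketch (the `Node` class and tree shape are assumptions for illustration):

```python
from collections import deque

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

# Tree: 1 at the root; 2 and 3 below it; 4 and 5 below 2.
root = Node(1, Node(2, Node(4), Node(5)), Node(3))

def bfs(node):
    order, queue = [], deque([node])
    while queue:
        n = queue.popleft()
        order.append(n.value)
        if n.left:
            queue.append(n.left)
        if n.right:
            queue.append(n.right)
    return order

def pre_order(n):
    if n is None:
        return []
    return [n.value] + pre_order(n.left) + pre_order(n.right)

def in_order(n):
    if n is None:
        return []
    return in_order(n.left) + [n.value] + in_order(n.right)

def post_order(n):
    if n is None:
        return []
    return post_order(n.left) + post_order(n.right) + [n.value]

print(bfs(root))         # [1, 2, 3, 4, 5]
print(pre_order(root))   # [1, 2, 4, 5, 3]
print(in_order(root))    # [4, 2, 5, 1, 3]
print(post_order(root))  # [4, 5, 2, 3, 1]
```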
Breadth-first search: 1 2 3 4 5
Depth-first search:
Pre-order: 1 2 4 5 3
In-order: 4 2 5 1 3
Post-order: 4 5 2 3 1
Depth-first search is used in topological sorting, scheduling problems, cycle detection in graphs, and solving puzzles with only one solution, such as a maze or a sudoku puzzle.
Other applications involve analyzing networks, for example, testing if a graph is bipartite. Depth-first search is often used as a subroutine in network flow algorithms such as the Ford-Fulkerson algorithm.
DFS is also used as a subroutine in matching algorithms in graph theory such as the Hopcroft–Karp algorithm.
Depth-first searches are used in mapping routes, scheduling, and finding spanning trees.
Source: Wikipedia
One method we can use when searching a graph is a breadth-first search (BFS). This searching algorithm explores the graph outward in rings of increasing distance from the starting vertex.
The algorithm never attempts to explore a vert it has already explored or is currently exploring.
For example, when starting from the upper left, the numbers on this graph show a vertex visitation order in a BFS:
We followed the edges represented with thicker black arrows. We did not follow the edges represented with thinner grey arrows because we already visited their destination nodes.
The exact order will vary depending on which branches get taken first and which vertex is the starting vertex.
Note: it's essential to know the distinction between a breadth-first search and a breadth-first traversal. A breadth-first traversal is when you visit each vertex in the breadth-first order and do something during the traversal. A breadth-first search is when you search through vertexes in the breadth-first order until you find the target vertex. A breadth-first search usually returns the shortest path from the starting vertex to the target vertex once the target is found.
Pathfinding, Routing
Find neighbor nodes in a P2P network like BitTorrent
Web crawlers
Finding people n connections away on a social network
Find neighboring locations on the graph
Broadcasting in a network
Cycle detection in a graph
Solving several theoretical graph problems
As we explore the graph, it is useful to color verts as we arrive at them and as we leave them behind as "already searched".
Unvisited verts are white, verts whose neighbors are being explored are gray, and verts with no unexplored neighbors are black.
In a BFS, it's useful to track which nodes we still need to explore. For example, in the diagram above, when we get to node 2, we know that we also need to explore nodes 3 and 4.
We can track that by adding neighbors to a queue (which, remember, is first in, first out) and then exploring the verts in the queue one by one.
Let's explore some pseudo-code that shows a basic implementation of a breadth-first-search of a graph. Make sure you can read the pseudo-code and understand what each line is doing before moving on.
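The original pseudo-code isn't reproduced here, so below is a sketch in Python that follows the same steps described in the walkthrough (the white/gray/black color names and the example graph are assumptions):

```python
from collections import deque

def bfs(graph, start):
    # Mark every vertex as unvisited (white).
    color = {vertex: "white" for vertex in graph}
    color[start] = "gray"      # we are exploring start's neighbors
    queue = deque([start])
    order = []
    while queue:
        current = queue[0]     # peek at the first item in the queue
        for neighbor in graph[current]:
            if color[neighbor] == "white":
                color[neighbor] = "gray"   # will explore its neighbors
                queue.append(neighbor)
        queue.popleft()            # dequeue the current vertex
        color[current] = "black"   # fully explored
        order.append(current)
    return order

example_graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(bfs(example_graph, 1))  # [1, 2, 3, 4]
```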
You can see that we start with a graph and the vertex we will start on. The very first thing we do is go through each of the vertices in the graph and mark them with the color white. At the outset, we mark all the verts as unvisited.
Next, we mark the starting vert as gray, meaning we are exploring the starting vert's neighbors. We also enqueue the starting vert, so it will be the first vert we look at once we enter the while loop.
The condition we check at the outset of each while loop is if the queue is not empty. If it is not empty, we peek at the first item in the queue by storing it in a variable.
Then, we loop through each of that vert's neighbors and:
We check if it is unvisited (the color white).
If it is unvisited, we mark it as gray (meaning we will explore its neighbors).
We enqueue the vert.
Next, we dequeue the current vert we've been exploring and mark that vert as black (marking it as visited).
We continue with this process until we have explored all the verts in the graph.
On your own, complete the following tasks:
Please spend a few minutes researching to find a unique use-case of a breadth-first-search that we did not mention in the list above.
Using the graph represented below, draw a picture of the graph and label each of the verts to show the correct vertex visitation order for a breadth-first-search starting with vertex "I".
Besides marking verts with colors as in the pseudo-code example above, how else could you track the verts we have already visited?
Another method we can use when searching a graph is a depth-first search (DFS). This searching algorithm "dives" "down" the graph as far as it can before backtracking and exploring another branch.
The algorithm never attempts to explore a vert it has already explored or is exploring.
For example, when starting from the upper left, the numbers on this graph show a vertex visitation order in a DFS:
We followed the edges represented with thicker black arrows. We did not follow the edges represented with thinner grey arrows because we already visited their destination nodes.
The exact order will vary depending on which branches get taken first and which vertex is the starting vertex.
DFS is often the preferred method of exploring a graph if we want to ensure we visit every node in the graph. For example, let's say that we have a graph representing all the friendships in the entire world, and we want to find a path between two known people, Andy and Sarah. If we used a depth-first search in this scenario, we could end up exceptionally far away from Andy while still not finding a path to Sarah. Using a DFS, we will eventually find the path, but it won't find the shortest route, and it will also likely take a long time.
So, this is an example of where a DFS would not work well. What about a genuine use case for DFS? Here are a few examples:
Finding minimum spanning trees of weighted graphs
Pathfinding
Detecting cycles in graphs
Topological sorting, useful for scheduling sequences of dependent jobs
Solving and generating mazes
Again, as we explore the graph, it is useful to color verts as we arrive at them and as we leave them behind as "already searched".
Unvisited verts are white, verts whose neighbors are being explored are gray, and verts with no unexplored neighbors are black.
Since DFS will pursue leads in the graph as far as it can, and then "back up" to an earlier branch point to explore that way, recursion is an excellent approach to help "remember" where we left off.
Let's explore some pseudo-code that shows a basic implementation of a depth-first-search of a graph and makes the recursion more clear. Make sure you can read the pseudo-code and understand what each line is doing before moving on.
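The pseudo-code itself isn't reproduced here, so below is a sketch in Python matching the description that follows (the color names, the `parent` map, and the example graph are assumptions):

```python
def dfs(graph):
    color = {v: "white" for v in graph}   # all verts start unvisited
    parent = {v: None for v in graph}     # parent of each vert is null
    for vertex in graph:
        if color[vertex] == "white":
            dfs_visit(graph, vertex, color, parent)
    return parent

def dfs_visit(graph, vertex, color, parent):
    color[vertex] = "gray"                # in the process of being explored
    for neighbor in graph[vertex]:
        if color[neighbor] == "white":
            parent[neighbor] = vertex
            dfs_visit(graph, neighbor, color, parent)
    color[vertex] = "black"               # fully explored (visited)

example_graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(dfs(example_graph))  # {'A': None, 'B': 'A', 'C': 'A', 'D': 'B'}
```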
You can see that we have two functions in our pseudo-code above. The first function, DFS(), takes the graph as a parameter and marks all the verts as unvisited (white). It also sets the parent of each vert to null. The next loop in this function visits each vert in the graph, and if it is unvisited, it passes that vert into our second function, DFS_visit().
DFS_visit() starts by marking the vert as gray (in the process of being explored). Then, it loops through all of its unvisited neighbors. In that loop, it sets the parent and makes a recursive call to DFS_visit(). Once it's done exploring all the neighbors, it marks the vert as black (visited).
On your own, complete the following tasks:
Please spend a few minutes researching to find a unique use-case of a depth-first search that we did not mention in the list above.
Using the graph represented below, draw a picture of the graph and label each of the verts to show the correct vertex visitation order for a depth-first-search starting with vertex "I".
Now that we've looked at and understand the basics of a breadth-first search (BFS) on a graph, let's implement a BFS algorithm.
Before defining our breadth-first search method, review our Vertex and Graph classes that we defined previously.
Now, we will add a breadth_first_search method to our Graph class. One of the most common and simplest ways to implement a BFS is to use a queue to keep track of unvisited nodes and a set to keep track of visited nodes. Let's start by defining the start of our function with these structures:
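Since the earlier class definitions aren't reproduced here, this sketch assumes a minimal Graph whose `vertices` attribute maps each vertex to a set of neighbors:

```python
from collections import deque

# Minimal stand-in for the Graph class described above (an assumption,
# since the earlier class definitions aren't reproduced here).
class Graph:
    def __init__(self):
        self.vertices = {}  # vertex -> set of neighboring vertices

    def add_vertex(self, vertex):
        self.vertices.setdefault(vertex, set())

    def add_edge(self, v1, v2):
        self.vertices[v1].add(v2)

    def breadth_first_search(self, start):
        queue = deque([start])   # unvisited vertices waiting to be explored
        visited = {start}        # vertices we have already seen
        order = []
        while queue:
            current = queue.popleft()
            order.append(current)
            for neighbor in self.vertices[current]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        return order

g = Graph()
for v in (1, 2, 3, 4):
    g.add_vertex(v)
g.add_edge(1, 2)
g.add_edge(1, 3)
g.add_edge(2, 4)
order = g.breadth_first_search(1)
print(order)  # starts with 1; vertex 4 is reached last, via 2
```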
What is the time complexity in Big O notation of a breadth-first search on a graph with V vertices and E edges?
Which method will find the shortest path between a starting point and any other reachable node? A breadth-first search or a depth-first search?
The depth-first search algorithm on a graph starts at an arbitrary vertex in the graph and explores as far as possible down each branch before backtracking. So, you start at the starting vertex, mark it as visited, and then move to an adjacent unvisited vertex. You continue this loop until every reachable vertex is visited.
Before defining our depth-first search method, review our Vertex and Graph classes that we defined previously.
Now, we will add a depth_first_search method to our Graph class. One of the most common and simplest ways to implement a DFS is to use a set to keep track of visited vertices and use recursion to manage the visitation order. Let's now define our function:
Does a depth-first search reliably find the shortest path?
If you didn't want to use recursion, what data structure could you use to write an iterative depth-first search?
Graphs are collections of related data. They’re like trees, except connections can be made from any node to any other node, even forming loops. By this definition, all trees are graphs, but not all graphs are trees.
We call the nodes in a graph vertexes (or vertices or verts), and we call the connections between the verts edges.
An edge denotes a relationship or linkage between the two verts.
Graphs can represent any multi-way relational data.
A graph could show a collection of cities and their linking roads.
It could show a collection of computers on a network.
It could represent trade relationships between nations.
It could represent the money owed in an ongoing poker night amongst friends.
And so on.
Directed and Undirected Graphs
The nature of the relationship that we represent determines if we should use a directed or undirected graph. If we could describe the relationship as "one way", then a directed graph makes the most sense. For example, representing the owing of money to others (debt) with a directed graph would make sense.
Directed graphs can also represent bidirectional relationships. For example, road maps are directed because some roads are one-way; for two-way streets, we can simply add an edge pointing in each direction.
If the relationship's nature is a mutual exchange, then an undirected graph makes the most sense. For example, we could use an undirected graph to represent users who have exchanged money in the past. Since an "exchange" relationship is always mutual, an undirected graph makes the most sense here.
Cyclic and Acyclic Graphs
If you can form a cycle (for example, follow the edges and arrive again at an already-visited vert), the graph is cyclic. For instance, in the image below, you can start at B and then follow the edges to C, E, D, and then back to B (which you’ve already visited).
Note: any undirected graph is automatically cyclic since you can always travel back across the same edge.
If you cannot form a cycle (for example, you cannot arrive at an already-visited vert by following the edges), we call the graph acyclic. In the example below, no matter which vert you start at, you cannot follow edges in such a way that you can arrive at an already-visited vert.
Weighted Graphs
Weighted graphs have values associated with the edges. We call the specific values assigned to each edge weights.
The weights represent different data in different graphs. In a graph representing road segments, the weights might represent the length of the road. The higher the total weight of a route on the graph, the longer the trip is. The weights can help decide which particular path we should choose when comparing all routes.
We can further modify weights. For example, if you were building a graph representing a map for bicycle routes, we could give roads with bad car traffic or very steep inclines unnaturally large weights. That way, a routing algorithm would be unlikely to take them. (This is how Google Maps avoids freeways when you ask it for walking directions.)
Directed Acyclic Graphs (DAGs)
A directed acyclic graph (DAG) is a directed graph with no cycles. In other words, we can order a DAG’s vertices linearly in such a way that every edge is directed from earlier to later in the sequence.
A DAG has several applications. DAGs can model many different kinds of information. Below is a small list of possible applications:
A spreadsheet where a vertex represents each cell and an edge for where one cell's formula uses another cell's value.
The milestones and activities of large-scale projects, where a topological ordering can help optimize the project's schedule to use as little time as possible.
Collections of events and their influence on each other like family trees or version histories.
It is also notable that git uses a DAG to represent commits. A commit can have a child commit, or more than one child commit (in a branch). A child could come from one parent commit or two (in the case of a merge). But there’s no way to go back and form a repeating loop in the git commit hierarchy.
Before you draw graphs on your own, let's draw some graphs together. For each graph, we will have a description.
Draw an undirected graph of 8 verts.
Remember from our definitions above that an undirected graph has bidirectional edges. So, we can draw eight verts and then connect them with solid lines (not arrows) any way we see fit.
Draw a directed graph of 7 verts.
A directed graph has at least one edge that is not bidirectional. So, again, we can draw our seven verts and then connect them with edges. This time, we need to make sure that one of the edges is an arrow pointing in only one direction.
Draw a cyclic directed graph of 5 verts.
This drawing will be similar to one for Exercise 2 because it is a directed graph. However, in this graph, we also need to ensure that it has at least one cycle. Remember that a cycle is when you can follow the graph's edges and arrive at a vertex that you've already visited.
To draw this graph, we will draw our five verts and then draw our edges, making sure that we create at least one cycle.
Draw a directed acyclic graph (DAG) of 8 verts.
Again, this graph will be directed. The difference is that it will be acyclic—we can order a DAG’s vertices linearly so that every edge is directed from earlier to later in the sequence.
For this graph, we will draw our eight verts in a line from left to right. We will then draw our edges, making sure that the edges always point from left to right (earlier to later in the sequence).
Draw one graph for each of the descriptions below:
Undirected graph of 4 verts.
Directed graph of 5 verts.
Cyclic directed graph of 6 verts.
DAG of 7 verts.
Two common ways to represent graphs in code are adjacency lists and adjacency matrices. Both of these options have strengths and weaknesses. When deciding on a graph implementation, it's essential to understand what type of data you will store and what operations you need to run on the graph.
Below is an example of how we would represent a graph with an adjacency matrix and an adjacency list. Notice how we represent the relationship between verts C and D when using each type.
Adjacency List
In an adjacency list, the graph stores a list of vertices. For each vertex, it holds a list of each connected vertex.
Below is a representation of the graph above in Python:
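A sketch of such an adjacency list follows; the exact vertices and edges are assumptions, since the figure isn't reproduced here. Each dictionary key is a vertex, and its value is the set of vertices it connects to:

```python
vertices = {
    "A": {"B", "C"},
    "B": {"C", "D"},
    "C": {"D"},
    "D": {"C"},
}
print("D" in vertices["C"])  # True
```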
Notice that this adjacency list doesn't use any lists. The vertices collection is a dictionary, which lets us access each collection of edges in O(1) constant time. Because a set contains the edges, we can check for edges in O(1) constant time.
Adjacency Matrix
Here is the representation of the graph above in an adjacency matrix:
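A sketch of the same assumed graph as a matrix (the vertex ordering is an assumption, since the figure isn't reproduced here):

```python
# Rows and columns are ordered A, B, C, D; 1 marks an edge, 0 no edge.
edges = [
    [0, 1, 1, 0],  # A -> B, A -> C
    [0, 0, 1, 1],  # B -> C, B -> D
    [0, 0, 0, 1],  # C -> D
    [0, 0, 1, 0],  # D -> C
]
print(edges[2][3])  # 1 (the edge from C to D)
```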
We represent this matrix as a two-dimensional array, a list of lists. With this implementation, we get the benefit of built-in edge weights: 0 denotes no relationship, while any other value represents an edge label or edge weight. The drawback is that we do not have a built-in association between the vertex values and their indices.
In practice, implementations of both the adjacency list and adjacency matrix would contain more information by including Vertex and Edge classes.
Adjacency matrices and adjacency lists have strengths and weaknesses. Let's explore their tradeoffs by comparing their attributes and the efficiency of operations.
In all the following examples, we are using the following shorthand to denote the graph's properties:
Space Complexity
Adjacency Matrix
Complexity: O(V^2) space
Consider a dense graph where each vertex points to each other vertex. Here, the total number of edges will approach V^2. This fact means that regardless of whether you choose an adjacency list or an adjacency matrix, both will have a comparable space complexity. However, dictionaries and sets are less space-efficient than lists. So, for dense graphs (graphs with a high average number of edges per vertex), the adjacency matrix is more efficient because it uses lists instead of dictionaries and sets.
Adjacency List
Complexity: O(V+E) space
Consider a sparse graph with 100 vertices and only one edge. An adjacency list would have to store all 100 vertices but only needs to keep track of that single edge. The adjacency matrix would need to store 100x100=10,000 connections, even though all but one would be 0.
Takeaway: The worst-case storage of an adjacency list occurs when the graph is dense; then the matrix and list representations have the same complexity (O(V^2)). However, for the general case, the list representation is usually more desirable. Also, since finding a vertex's neighbors is a common task, and adjacency lists make this operation more straightforward, it is most common to use adjacency lists to represent graphs.
Add Vertex
Adjacency Matrix
Complexity: O(V) time
For an adjacency matrix, we would need to add a new value to the end of each existing row and add a new row.
```python
# Append a 0 to every existing row, then add a new row for the vertex.
for row in self.edges:
    row.append(0)
self.edges.append([0] * (len(self.edges) + 1))
```
Remember that with Python lists, appending to the end of a list is O(1) because of over-allocation of memory, but it can be O(n) when the over-allocated memory fills up. When this occurs, adding the vertex can be O(V^2).
Adjacency List
Complexity: O(1) time
Adding a vertex is simple in an adjacency list:
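A minimal sketch (the vertex names are illustrative assumptions):

```python
vertices = {"A": set(), "B": set()}

# Adding a key to a dictionary is an O(1) operation on average.
vertices["C"] = set()
print("C" in vertices)  # True
```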
Adding a new key to a dictionary is a constant-time operation.
Takeaway: Adding vertices is very inefficient for adjacency matrices but very efficient for adjacency lists.
Remove Vertex
Adjacency Matrix
Complexity: O(V^2)
Removing vertices is inefficient in both representations. In an adjacency matrix, we need to remove the removed vertex's row and then remove that column from each remaining row. Removing an element from a list requires moving everything after that element over by one slot, which takes an average of V/2 operations. Since we need to do that for every single row in our matrix, that results in O(V^2) time complexity. We also need to reduce each vertex index after our removed index by one, which doesn't add to our quadratic time complexity but does add extra operations.
Adjacency List
Complexity: O(V)
For an adjacency list, we need to visit each vertex and remove all edges pointing to our removed vertex. Removing elements from sets and dictionaries is an O(1) operation, resulting in an overall O(V) time complexity.
Takeaway: Removing vertices is inefficient in both adjacency matrices and lists but more efficient in lists.
Add Edge
Adjacency Matrix
Complexity: O(1)
Adding an edge in an adjacency matrix is simple:
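A minimal sketch for a two-vertex matrix (the indices are illustrative assumptions):

```python
# A 2x2 matrix for two vertices; edges[v1][v2] = 1 records an edge.
edges = [
    [0, 0],
    [0, 0],
]
edges[0][1] = 1  # connect vertex 0 to vertex 1 in O(1)
```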
Adjacency List
Complexity: O(1)
Adding an edge in an adjacency list is simple:
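A minimal sketch (the vertex names are illustrative assumptions):

```python
vertices = {"A": set(), "B": set()}
# Adding to a set is an O(1) operation on average.
vertices["A"].add("B")
```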
Both are constant-time operations.
Takeaway: Adding edges to both adjacency matrices and lists is very efficient.
Remove Edge
Adjacency Matrix
Complexity: O(1)
Removing an edge from an adjacency matrix is simple:
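A minimal sketch (indices assumed for illustration):

```python
edges = [
    [0, 1],
    [0, 0],
]
edges[0][1] = 0  # disconnect vertex 0 from vertex 1 in O(1)
```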
Adjacency List
Complexity: O(1)
Removing an edge from an adjacency list is simple:
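A minimal sketch (vertex names assumed for illustration):

```python
vertices = {"A": {"B"}, "B": set()}
vertices["A"].remove("B")  # O(1) set removal on average
```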
Both are constant-time operations.
Takeaway: Removing edges from both adjacency matrices and lists is very efficient.
Find Edge
Adjacency Matrix
Complexity: O(1)
Finding an edge in an adjacency matrix is simple:
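A minimal sketch (indices assumed for illustration):

```python
edges = [
    [0, 1],
    [0, 0],
]
# A single O(1) index lookup tells us whether the edge exists.
connected = edges[0][1] > 0
print(connected)  # True
```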
Adjacency List
Complexity: O(1)
Finding an edge in an adjacency list is simple:
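A minimal sketch (vertex names assumed for illustration):

```python
vertices = {"A": {"B"}, "B": set()}
# Set membership is an O(1) check on average.
connected = "B" in vertices["A"]
print(connected)  # True
```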
Both are constant-time operations.
Takeaway: Finding edges in both adjacency matrices and lists is very efficient.
Get All Edges from Vertex
You can use several commands if you want to know all the edges originating from a particular vertex.
Adjacency Matrix
Complexity: O(V)
In an adjacency matrix, this is complicated. You would need to iterate through the entire row and populate a list based on the results:
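A minimal sketch of that row scan (the matrix contents are assumptions):

```python
edges = [
    [0, 1, 1],
    [0, 0, 0],
    [1, 0, 0],
]

def get_all_edges(edges, vertex):
    # Scan the vertex's entire row: O(V) for V vertices.
    return [i for i, weight in enumerate(edges[vertex]) if weight > 0]

print(get_all_edges(edges, 0))  # [1, 2]
```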
Adjacency List
Complexity: O(1)
With an adjacency list, this is as simple as returning the value from the vertex dictionary:
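A minimal sketch (vertex names assumed for illustration):

```python
vertices = {"A": {"B", "C"}, "B": set(), "C": set()}

def get_all_edges(vertices, vertex):
    # A single O(1) dictionary lookup returns the whole edge set.
    return vertices[vertex]

print(sorted(get_all_edges(vertices, "A")))  # ['B', 'C']
```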
Takeaway: Fetching all edges is less efficient in an adjacency matrix than an adjacency list.
Summary
Let's summarize all this complexity information in a table:
In most practical use-cases, an adjacency list will be a better choice for representing a graph. However, it is also crucial that you be familiar with the matrix representation. Why? Because there are some dense graphs or weighted graphs that could have better space efficiency when represented by a matrix.
Together, we will now use the graph shown in the picture above and represent it in both an adjacency list and an adjacency matrix.
First, the adjacency list:
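A sketch of a weighted adjacency list; the specific vertices and weights are assumptions, since the picture isn't reproduced here. Each inner dictionary maps a neighbor to the weight of the connecting edge:

```python
vertices = {
    "A": {"B": 1, "C": 3},
    "B": {"C": 2},
    "C": {},
}
print(vertices["A"]["C"])  # 3 (the weight of the edge from A to C)
```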
The difference between this implementation and the previous adjacency list is that this representation allows our edges to have weights.
Now, we need to implement an adjacency matrix. Remember, that one benefit of the matrix is how easy it is to represent edge weights:
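A sketch of the same assumed graph as a weighted matrix:

```python
# Rows and columns are ordered A, B, C; a value of 0 means "no edge",
# and any other value is the edge's weight.
edges = [
    [0, 1, 3],  # A -> B (weight 1), A -> C (weight 3)
    [0, 0, 2],  # B -> C (weight 2)
    [0, 0, 0],  # C has no outgoing edges
]
print(edges[0][2])  # 3
```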
Using the graph shown in the picture above, write python code to represent the graph in an adjacency list.
Using the same graph you used for the first exercise, write python code to represent the graph in an adjacency matrix.
Write a paragraph that compares and contrasts the efficiency of your different representations.
We will now use dictionaries to implement the graph abstract data type in Python. We need two classes. First, the Graph class will keep track of the vertices in the graph instance. Second, the Vertex class will represent each vertex contained in a graph. Both classes will have methods that allow you to complete the basic operations for interacting with graphs and vertices.
Vertex Class
Let's start by defining a Vertex class with an initialization method (__init__) and a __str__ method so we can print out a human-readable string representation of each vertex:
The next thing we need for our Vertex class is a way to add connections to other vertices, along with the weight of the connecting edge. We will call this method add_connection.
Let's now add three methods that allow us to get data out of our Vertex instance objects: get_connections (retrieves all currently connected vertices), get_value (retrieves the value of the vertex instance), and get_weight (gets the edge weight from the vertex to a specified connected vertex).
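The steps above can be sketched as a complete class. The exact attribute names are assumptions, since the original listings aren't reproduced here:

```python
class Vertex:
    def __init__(self, value):
        self.value = value
        self.connections = {}  # connected Vertex object -> edge weight

    def __str__(self):
        return f"{self.value} connected to: " + str(
            [vertex.value for vertex in self.connections]
        )

    def add_connection(self, vertex, weight=0):
        self.connections[vertex] = weight

    def get_connections(self):
        return self.connections.keys()

    def get_value(self):
        return self.value

    def get_weight(self, vertex):
        return self.connections[vertex]

a = Vertex("A")
b = Vertex("B")
a.add_connection(b, 5)
print(a)  # A connected to: ['B']
```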
We've finished our Vertex class. Now, let's work on our Graph class.
Graph Class
Our Graph class's primary purpose is to map vertex names to specific vertex objects. We also want to keep track of the number of vertices our graph contains using a count property. We will do so using a dictionary. Let's start by defining an initialization method (__init__).
Next, we need a way to add vertices to our graph. Let's define an add_vertex method.
We also need a way to add an edge to our graph: a method that can create a connection between two vertices and specify the edge's weight. Let's do so by defining an add_edge method.
Next, we need a way to retrieve a list of all the vertices in our graph. We will define a method called get_vertices.
Last, we will override a few built-in methods (__contains__ and __iter__) that are available on objects to make sure they work correctly with Graph instance objects.
Let's go ahead and test our class definitions and build up a graph structure in a Python interactive environment.
Load the Vertex class and Graph class into an interactive Python environment and use the classes to create an instance of the graph shown below.
It could show a population of people who know each other.
Note: Dijkstra's algorithm is a graph search variant that accounts for edge weights.
| Shorthand | Property |
| --- | --- |
| V | Total number of vertices in the graph |
| E | Total number of edges in the graph |
| e | Average number of edges per vertex |
| Type | Space | Add Vert | Remove Vert | Add Edge | Remove Edge | Find Edge | Get All Edges |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Matrix | O(V^2) | O(V) | O(V^2) | O(1) | O(1) | O(1) | O(V) |
| List | O(V+E) | O(1) | O(V) | O(1) | O(1) | O(1) | O(1) |