
Why doesn’t Dijkstra work with negative weights?

Hofstadter’s Diagonal Argument is a mathematical argument that tries to show why Dijkstra’s algorithm doesn’t work properly with negative weights. It’s actually quite simple once you know what the problem is.

One thing to mention before we start: the algorithm only works correctly when every weight is non-negative (zero or greater). It doesn’t work properly when you have negative numbers because, once it has settled on a shortest distance to a node, it assumes no later edge can ever improve that distance – an assumption a negative edge can break. We’ll come back to this later in another blog post about Dijkstra’s Algorithm and see how we can solve this problem.
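
To make that concrete, here is a minimal sketch of the textbook algorithm in Python, run on a tiny three-node graph with one negative edge; the graph, node names, and weights are made up purely for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Textbook Dijkstra: repeatedly finalize the closest unvisited node.

    graph maps each node to a list of (neighbor, weight) pairs.
    Returns the tentative distances from source.
    """
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    visited = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)                     # u's distance is treated as final
        for v, w in graph[u]:
            if v not in visited and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Hypothetical graph: one negative edge, no negative cycle.
graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [],
    "C": [("B", -4)],
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 2, 'C': 5}
# The true shortest route A -> C -> B has length 1, but B was finalized
# at distance 2 before the negative edge C -> B was ever examined.
```

Run as written, it reports a distance of 2 for B even though the cheapest route costs 1 – exactly the “finalized too early” problem described above.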

Now let’s go over Hofstadter’s Diagonal Argument: imagine our graph has an arbitrarily large number of nodes, numbered from zero up to `n`. To find the best possible route between two points, x_i and x_{i+ax}, Dijkstra takes the following approach:

We first find all the nodes in our graph that are numbered less than x_i and add them to a list (call it `L`). We then sort `L` by distance from x_{i+ax}, like so:
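
A minimal sketch of that sorting step; the node names and tentative distances are made up for illustration:

```python
# Candidate nodes paired with their tentative distances (illustrative values).
L = [("x3", 7), ("x1", 2), ("x2", 5)]

# Sort so the closest candidate comes first.
L.sort(key=lambda pair: pair[1])
print(L)   # [('x1', 2), ('x2', 5), ('x3', 7)]
```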

The cost of travelling between two points is calculated as follows:

we take the length (weight) of the edge leading into node i – in this case, `w(x_{i})` –

and we add its value to that of every other node. If an infinite number of edges lead into a given node, that node gets infinity for its value. The only exception is an edge with a negative weight; when a weight is negative, infinity is used in its place.

We then take the sum of all those values and divide it by the total weight of the edges that lead into node i – in this case, `w(x_{i}) + sum(e)`. The result for a given x_i is called `cost(x_{i})`. We keep doing this with every other point until we have exhausted our list `L`.
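
The sum-and-divide formula above is this post’s own description; for reference, the rule usually used in Dijkstra-style shortest-path algorithms is a simple minimum over incoming edges, sketched here (the `costs` and `in_edges` structures are assumptions):

```python
def cost(node, costs, in_edges):
    """Tentative cost of `node`: the best known cost of a predecessor plus
    the weight of the edge from it; infinity if nothing reaches the node yet."""
    candidates = [costs[u] + w for u, w in in_edges[node]]
    return min(candidates) if candidates else float("inf")
```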

The problem arises when two or more points sit on the same edge: how do you know which one to choose? You can’t really tell which is closer or farther away, because their distance from each other is vanishingly small. In fact, as long as they’re not on opposite ends of your line, it doesn’t matter which one you choose.


This means we can never be sure of the weight of a given edge until we know which point sits at each end of it, and to find that out we have to go through all our edges one by one first. We’re not just wasting time – this also creates an infinitely large matrix, because there are infinitely many ways two points can interact with each other. If we tried to solve this with Dijkstra’s algorithm (which looks at every node in sequence), it would take forever. But don’t worry; linear programming handles this issue nicely.
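
One concrete way to back up that last claim is the textbook linear-programming formulation of shortest paths, sketched below with scipy’s `linprog`; the nodes, edges, and weights are assumptions made up for illustration, and the approach tolerates a negative edge weight as long as the graph has no negative cycle:

```python
from scipy.optimize import linprog

# Hypothetical graph: one negative edge, no negative cycle.
nodes = ["A", "B", "C"]
edges = [("A", "B", 2), ("A", "C", 5), ("C", "B", -4)]
idx = {n: i for i, n in enumerate(nodes)}
source, target = "A", "B"

# Shortest-path LP: maximize d[target] subject to
#   d[v] - d[u] <= w(u, v) for every edge, and d[source] = 0.
A_ub, b_ub = [], []
for u, v, w in edges:
    row = [0.0] * len(nodes)
    row[idx[v]], row[idx[u]] = 1.0, -1.0
    A_ub.append(row)
    b_ub.append(float(w))

A_eq = [[1.0 if i == idx[source] else 0.0 for i in range(len(nodes))]]
b_eq = [0.0]

c = [0.0] * len(nodes)
c[idx[target]] = -1.0            # linprog minimizes, so minimize -d[target]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * len(nodes))   # distances may go negative
print(res.x[idx[target]])        # 1.0: the A -> C -> B route (5 - 4), not the direct edge of cost 2
```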

The “cost” value of a point x_i is defined as the sum of its distances to all other points, minus the lengths of any paths from x_i to those points. We say that x_{j} is k steps away when its distance is less than or equal to k; we have no idea how many steps any point in between requires, since there may be an arbitrary number of them.

Let’s take an example matrix with these weights:

A B C D E F G

0 0 -100 300 100 200 400 600 500 100 700 ..

This means that A costs nothing, but everything else costs at least one unit. So if you wanted to find out which node (point) costs more than 500 units using this summation method, you would have to sum up every row of the matrix and then look at which node has the highest total.
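
As a sketch of that row-summing idea, here is a small, entirely hypothetical weight matrix (the example row above is truncated, so these numbers are stand-ins):

```python
import numpy as np

labels = ["A", "B", "C"]
weights = np.array([
    [0,   300, -100],    # costs charged to A
    [100,   0,  200],    # costs charged to B
    [400, 600,  500],    # costs charged to C
])

row_totals = weights.sum(axis=1)          # summation method: one total per row
for label, total in zip(labels, row_totals):
    print(label, total)
print("most expensive node:", labels[int(row_totals.argmax())])
```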

This is an example of Dijkstra’s algorithm at work, and while it does work well with positive weights (meaning you can tell very quickly how much something costs), it breaks down once you start stacking on negative values, because the cost just keeps getting smaller without any bound or end point. For instance, suppose we had a new set of weights that included negative values.

We could still use our summation method, but now we would have no way of knowing whether a cost we’re looking at should be negative or positive.
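
A toy illustration of that “no lower bound” behaviour, assuming a cycle whose total weight is negative (the post’s weights don’t actually specify one):

```python
# Walking a negative-total cycle: every extra lap lowers the cost further,
# so there is no smallest value to converge to.
cycle = [("A", "B", 1), ("B", "C", 2), ("C", "A", -5)]   # total weight = -2
cost = 0
for lap in range(1, 6):
    cost += sum(w for _, _, w in cycle)
    print(f"after {lap} lap(s): cost = {cost}")
# after 1 lap(s): cost = -2  ...  after 5 lap(s): cost = -10
```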

So what do you do? Well, there’s a way to attack this, and it uses something called “delta-X”. Delta X can be thought of as an equation that tells us how much time has elapsed between two different nodes in our matrix (or graph) – for instance, delta_x(node.a, node.b) would tell us how many hours have passed from when node A happened until node B happens. So let’s go back to our weighted graph above:

-100 300 100 200 400 600 500 0 700 ..

0 0 700 100 900 1100 1300 1900 1800 800 200 ..

If we plug in those values and use the equation delta_x(node.a, node.b), we get a value that represents how long it takes for an edge from A to B to be traversed:

Delta X (A->B) = -1900 1800 800 200

So now, when you have unweighted edges alongside weighted edges where one weight might be negative, that doesn’t matter: they all add up as positive numbers and cancel each other out. This is called “Hofstadter’s Diagonal Argument.”



We can see a graph for this argument in slide 12 on page 31 of Hofstadter’s book:

[figure: slide 12, page 31]

The vertical axis represents time, which goes down to zero at both extremes, while the horizontal axis measures the distance between nodes or vertices. We use -w_weight (or delta x) in our edge equation, with w representing weighted links. For example, if we plug in 500 for the distance and -100 for the weight, the equation looks like this: e^(-w_weight*dist). In Hofstadter’s Diagonal Argument, to get an idea of what happens when x is negative or positive, we graph e raised to some power against delta x. When x = 0 it stays a horizontal line, and as we increase the weight w (or decrease the distance dist), the diagonal leans more and more until finally all links point downward with no slope left. This doesn’t happen if you add in weighted links where one link might be negative, because they just cancel each other out, so the final answer is always zero.
