Crack Leetcode 121. Best Time to Buy and Sell Stock

There is a way to make it easy.

Christina Lalay
The Startup

--

Sometimes they make it look hard, while in fact there's always a way you can easily understand it. Today, we'll crack LeetCode 121, Best Time to Buy and Sell Stock, together. Without further ado, let's get started.

Problem Description:

(photo taken from Leetcode: https://leetcode.com/problems/best-time-to-buy-and-sell-stock/)

This problem is actually asking: given all the stock prices in this time period, what's the maximum profit you can make with one buy and one sell?

Solution:

Solution #1. Dynamic Programming:

I didn't come up with this myself, but I thought it was worth sharing, so I've listed it here as the first solution:

(photo taken from leetcode discussion: https://leetcode.com/problems/best-time-to-buy-and-sell-stock/discuss/923225/two-different-kinds-of-dynamic-programming-python)

#1st code:

For the code marked as "Kadane's algorithm" above, I believe this picture makes it easier to understand:

As the picture says, "local" is the running sum of price differences up to today, i.e. the profit of a streak ending today. If it would hit a negative value, "local" resets to zero. And "Total" is the larger of yesterday's best and today's "local".

From my understanding, it's essentially another way of writing the straight-forward solution, and the code can be translated into this:

which might be easier to understand. It means: we check the new stock price every day and calculate how much profit we would get if we sold today. If that's larger than our previously calculated profit, we update Total. Otherwise, if today's price has dropped below our buy-in price, we buy in today (so today's profit counts as 0).

That is what this line means: local = max(0, local + prices[i+1] - prices[i])
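Since the original snippet appears only as an image, here is a minimal sketch of the Kadane's-algorithm pass described above; the function name is mine:

```python
def max_profit_kadane(prices):
    """Kadane-style pass over day-to-day price differences."""
    local = 0   # profit of a buy/sell streak ending today; resets at 0
    total = 0   # best profit seen so far
    for i in range(len(prices) - 1):
        # extend the streak with today's price change, reset if it goes negative
        local = max(0, local + prices[i + 1] - prices[i])
        total = max(total, local)
    return total
```

For example, with prices [7, 1, 5, 3, 6, 4] the streak resets after the drop from 7 to 1, then accumulates the rises 1→5 and 3→6 into a best profit of 5.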

#2nd code:

Meanwhile, in the #ordinary dynamic programming code, this line is worth analyzing:

profit[i]=max(profit[i-1], prices[i]-minimum)

It's essentially using a list to store the maximum profit expectation for each day. Say today is day i; then to get today's max profit expectation, we compare today's profit (selling at today's price after buying at the minimum price so far) with yesterday's max profit expectation, profit[i-1].
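Since that code is also only shown as an image, here is a sketch of the #ordinary dynamic programming version built around the recurrence above; the function name is mine:

```python
def max_profit_dp(prices):
    """DP table: profit[i] = best profit achievable by the end of day i."""
    if not prices:
        return 0
    profit = [0] * len(prices)
    minimum = prices[0]  # cheapest buy-in price seen so far
    for i in range(1, len(prices)):
        minimum = min(minimum, prices[i])
        # either keep yesterday's best, or sell today at the lowest buy-in
        profit[i] = max(profit[i - 1], prices[i] - minimum)
    return profit[-1]
```

The list is what costs O(N) space here; since only profit[i-1] is ever read, it could be collapsed into a single variable.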

Complexity Analysis for the two:

Time Complexity:

The for-loop over i takes (N-1) iterations, and within each iteration we do a constant number of operations, so the time complexity is O(N).

Space Complexity:

The #Kadane's algorithm uses several variables, and they don't take more space as N grows. Therefore the space complexity is O(1).

The #Ordinary dynamic programming method uses a list of length N plus several variables. Therefore the space complexity is O(N).

Solution #2: Straight-forward solution:

Suppose we buy the stock on day 1. Then we check the new stock price every day and calculate how much profit we would get if we sold today.

We update the possible profit whenever a larger profit is found.

Meanwhile, if today's stock price is lower than our buy-in price, we start another round by buying in today.
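The steps above can be sketched like this (the function name is mine):

```python
def max_profit_simple(prices):
    """One pass: track buy-in price, update best profit, rebuy on a new low."""
    if not prices:
        return 0
    buy = prices[0]  # buy in on day 1
    best = 0
    for price in prices[1:]:
        if price < buy:
            # today's price is below our buy-in: start a new round
            buy = price
        elif price - buy > best:
            # selling today beats any previously found profit
            best = price - buy
    return best
```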

Complexity Analysis for solution #2:

Time Complexity:

The for-loop over i takes (N-1) iterations, and within each iteration we do at most 3 comparisons, so in total we need O(N) operations.

Therefore the time complexity is O(N).

Space Complexity:

The algorithm used several variables, and they don’t increase space with N growing. Therefore the Space complexity is O(1).

# Optimization:

If the stock price is flat or going upward, (1) we don't want to sell yet, and (2) we don't need to consider buying in.

We only consider selling out or buying in when the stock price goes down.

Based on this, I modified the algorithm a little bit, skipping the upward-going values. This is the same technique used in the 3Sum problem (LeetCode 15), where it's used to skip duplicates.
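A sketch of that optimization, under my reading of the idea above (walk down through falling prices to find a buy-in, skip rising runs, and only evaluate a sell at each local peak); the function name is mine:

```python
def max_profit_skip(prices):
    """Skip rising runs; only act at local troughs and peaks."""
    n = len(prices)
    if n == 0:
        return 0
    buy = prices[0]
    best = 0
    i = 0
    while i < n - 1:
        # walk down: every falling price is a cheaper buy-in candidate
        while i < n - 1 and prices[i + 1] <= prices[i]:
            i += 1
            buy = min(buy, prices[i])
        # walk up: skip rising days, since only the peak matters for selling
        while i < n - 1 and prices[i + 1] >= prices[i]:
            i += 1
        # prices[i] is a local peak (or the last day): try selling here
        best = max(best, prices[i] - buy)
    return best
```

Note that `buy` keeps the cheapest price seen across all troughs so far, so a later peak can still be paired with an earlier low.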

Complexity Analysis for optimized solution #2:

Time Complexity:

Time complexity stays the same, O(N), as it still needs one pass. The actual execution time would be faster, though.

Space Complexity:

For the same reason as above, the variables used don't grow with N. Therefore, the space complexity is O(1).

The END:

Thanks for reading! Please check our articles more often, if you’d like to support our work — and we’ll see you in the next one!
