"Greedy" algorithms
Greedy algorithms are algorithms that, at every step, make the locally optimal decision without worrying about what will happen next. They are not always correct, but there are problems where greedy algorithms do work correctly.
Here is an example of a greedy algorithm. Recall the "Paid Ladder" problem from the DP contest. The correct solution there is indeed DP, but one can also come up with the following greedy solution (which is, admittedly, wrong). At every step we have two options: climb onto the next step or jump over it onto the one after. We look at which of these two options is cheaper, i.e. which of the two steps has the lower price, and move there.
Of course, this solution is wrong; here is an example. Suppose the following numbers are written on the steps:
1 2 10 2
then the greedy algorithm sees that initially it has two options: go to the step with 1 written on it or to the step with 2, and it goes to the step with 1, because that is cheaper. But the correct move here is to go to the step with 2, because then we can jump over the step with 10.
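To make the failure concrete, here is a sketch in Python comparing the greedy strategy with the DP solution on this example (I assume we start on the ground below the first step and must finish exactly on the last one; the exact statement of the original problem may differ slightly):

```python
def greedy_cost(cost):
    """Greedy: from the current position, move onto the cheaper of the two reachable steps."""
    total, i, n = 0, -1, len(cost)   # i = -1 means "on the ground"
    while i < n - 1:
        # reachable steps: i+1 and i+2 (if the latter exists)
        if i + 2 < n and cost[i + 2] < cost[i + 1]:
            i += 2
        else:
            i += 1
        total += cost[i]
    return total

def dp_cost(cost):
    """DP: dp[i] = cost[i] + min(dp[i-1], dp[i-2]), minimum cost to stand on step i."""
    n = len(cost)
    dp = [0] * n
    dp[0] = cost[0]
    if n > 1:
        dp[1] = cost[1]              # from the ground we may start on step 0 or step 1
    for i in range(2, n):
        dp[i] = cost[i] + min(dp[i - 1], dp[i - 2])
    return dp[-1]

print(greedy_cost([1, 2, 10, 2]))  # 5 (greedy pays 1, then 2, then 2)
print(dp_cost([1, 2, 10, 2]))      # 4 (optimal: pay 2, jump over 10, pay 2)
```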
This example clearly shows why greedy algorithms usually don't work. They do not take into account the long-term consequences of their actions; they make a choice that is optimal only with respect to the immediate prospects.
(Let me note right away that one is often tempted to use greedy algorithms in DP problems. Indeed, many problems resemble both greedy and DP; the difference is that in a greedy algorithm you prove there are no options worth considering, while in DP you honestly consider them all. So if a greedy approach does not work, think about whether DP is possible here. That said, there are also many greedy problems where DP is hard to imagine, and many greedy problems that do not look like DP at all.)
But there are problems in which a greedy approach does work, in which the greedy algorithm can be proven correct. In the simplest problems the correctness of the greedy algorithm is obvious (and at this level most of your problems will be of this type; more advanced ones come at level 6B). In some problems the correctness may not be obvious (or there may even be several different greedy algorithms you can come up with, and it is not clear which one is correct), but you can write the greedy solution, submit it for checking (if the particular olympiad allows it), and immediately find out whether it is correct. Finally, even an incorrect greedy algorithm often works on simple cases, so greedy algorithms are often well suited as partial solutions.
How to prove a greedy algorithm?
How are greedy algorithms usually proved? In fact, at the current level you do not have to learn how to prove them, but understanding what is written below will serve you well.
There are two approaches to proving greedy algorithms. The first is more general. It applies to problems where you have to make several consecutive steps, several consecutive choices. (In the paid ladder example above, you make several consecutive choices of "which step to go to".) You need to prove that making the locally optimal choice does not destroy the possibility of arriving at a globally optimal solution. The proof usually goes like this: take the solution constructed by the greedy algorithm, take an optimal solution, find the first step where they differ, and prove that the optimal solution can be modified so that it remains optimal but this differing step now coincides with the greedy solution. We then have an optimal solution that coincides with the greedy one for one step longer. From this it follows that there is an optimal solution that coincides with the greedy one completely, i.e. that the greedy solution is optimal.
Example. Suppose we have the following problem: there are $N$ items, each with its own weight. We must choose as many items as possible so that their total weight does not exceed a given number $C$. The obvious greedy solution is to take items, starting with the lightest, while the total weight does not exceed $C$. As soon as the next item would push us over, we stop and output the answer.
Let's prove it. Consider the greedy solution; it takes items in ascending order of weight. Consider an optimal solution, and consider the first step where the greedy solution deviates from it. This means that in the greedy solution we took an item (call it $X$) that is not in the optimal solution. Then there must be some item in the optimal solution (call it $Y$) that is not in the greedy one; otherwise the greedy solution would contain more items than the optimal one, which contradicts optimality. Moreover, item $Y$ is not lighter than item $X$, because the greedy solution takes items in ascending order of weight. Now take the optimal solution and replace item $Y$ in it with item $X$. The total weight will not increase and the number of items will not decrease, so the solution is still optimal. But it coincides with the greedy one for one step longer. QED.
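As a sketch, the greedy procedure from this example might look like this in Python (a minimal version; reading the input is omitted):

```python
def max_items(weights, C):
    """Take items lightest-first while the total weight stays within C."""
    total = count = 0
    for w in sorted(weights):
        if total + w > C:
            break            # weights are sorted, so no later item fits either
        total += w
        count += 1
    return count

print(max_items([5, 3, 8, 1], 10))  # 3 (take 1, 3, 5; adding 8 would exceed 10)
```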
The second approach to the proof suits problems where you need to choose an order of objects: you are given a set of objects, and you must choose the order to arrange them in so as to optimize something. You can then try to prove that the objects should be arranged in ascending order of some parameter (this will be the greedy algorithm). The proof goes as follows: suppose the objects in an optimal solution do not go in this order. Then we find two adjacent objects that go in the wrong order, swap them, and prove that the solution does not get worse, and therefore remains optimal. From this it follows that the greedy solution (which arranges the objects in ascending order of this parameter) is correct.
Example. In ACM-style olympiads, participants solve problems. For each solved problem they receive a penalty equal to the time elapsed from the start of the round to the moment that problem was solved. Suppose we have an ideal team that spends $t_i$ minutes solving the $i$-th problem (and never makes unsuccessful attempts). In what order should it solve the problems to get the minimum total penalty?
The greedy algorithm: in ascending order of $t_i$. Proof. Suppose there is an optimal solution in which the $t_i$ are not sorted in ascending order. Then we can find two problems $i$ and $j$ such that in the optimal solution we solve problem $i$ first and problem $j$ immediately after it, while $t_i \ge t_j$. Swap them. What changes in the total penalty? For all problems solved before these two, the penalty does not change. For all problems solved after these two, the penalty does not change either. (That is exactly why we took adjacent problems.) The combined penalty of these two problems, counted from the moment we started working on them, was $t_i+(t_i+t_j)$, and it became $t_j+(t_i+t_j)$. Since $t_i \ge t_j$, the solution has not gotten worse, so it is still optimal. QED.
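A sketch of this greedy algorithm in Python (the input format here is my own assumption):

```python
def min_penalty(times):
    """Solve problems in ascending order of t_i; each solved problem adds
    the elapsed time at the moment it is finished to the total penalty."""
    elapsed = penalty = 0
    for t in sorted(times):
        elapsed += t
        penalty += elapsed
    return penalty

print(min_penalty([30, 10, 20]))  # 10 + 30 + 60 = 100
```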
(Both proofs above can in principle be reformulated as proofs by contradiction: if the optimal solution differs from the greedy one anywhere, we modify the optimal solution and obtain a solution that is strictly better, which means the "optimal" solution was not optimal after all. Yes, it can be proved this way too, but then you must deal carefully with the case of equal values: the case $t_i=t_j$, or the case of two items of equal weight in the first problem.)
In fact, the second approach to the proof also lets you invent the greedy algorithm in problems where it is not obvious (if you do not understand this paragraph, don't worry). If a problem requires arranging objects in some order and you do not know which, reason as follows: suppose you have some order. Swap two adjacent objects and see how the solution changes. Let the value of the old solution be $X$ and of the new one $Y$ (values of the objective function, of course). Write down the condition $X>Y$, i.e. that the solution improved. Try to transform it so that everything reduces to the characteristics of the two swapped objects. Then perhaps you will find that the condition $X>Y$ is equivalent to the condition $f(i)>f(j)$, where $i$ and $j$ are the objects you swapped and $f$ is some function. Then it is clear that in the correct solution you just need to sort the objects by the value of the function $f$.
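This recipe can also be checked empirically before attempting a proof: conjecture a sort key $f$, then compare the sorted order against brute force over all permutations on small random inputs. A sketch for the penalty problem above, where the conjectured key is $f(i)=t_i$:

```python
import random
from itertools import permutations

def penalty(order):
    """Total penalty when problems are solved in the given order of times."""
    elapsed = total = 0
    for t in order:
        elapsed += t
        total += elapsed
    return total

# Conjecture: sorting by f(i) = t_i is optimal. Check against brute force
# on small random cases before trying to prove it.
random.seed(1)
for _ in range(200):
    ts = [random.randint(1, 20) for _ in range(5)]
    best = min(penalty(p) for p in permutations(ts))
    assert penalty(sorted(ts)) == best
print("sorting by t_i matched brute force on all random tests")
```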