The solution to this problem is simple. I have fixed it on my side because I have faced this attack many times before. If you still have problems after following the steps below, you can add me on Steam and get help.
my steam nick: 42global42
link:
https://steamcommunity.com/id/LaRoVV66xD/
Fix #1: Blocking an IP with iptables
Quote:
sudo iptables --append INPUT --source 123.123.123.123 --jump DROP
Module warm-up: the conntrack and log modules
Quote:
$ sudo iptables --flush # start again
$ sudo iptables --append INPUT --protocol tcp --match conntrack --ctstate NEW --jump LOG --log-prefix "NEW TCP CONN: "
Fix #2: Rate limiting with the limit module
Quote:
$ sudo iptables --flush # start again
$ sudo iptables --new-chain RATE-LIMIT
$ sudo iptables --append INPUT --match conntrack --ctstate NEW --jump RATE-LIMIT
Then in the RATE-LIMIT chain, create a rule which matches no more than 50 packets per second. These are the connections we’ll accept per second, so jump to ACCEPT. (I’ll explain --limit-burst later.)
Quote:
$ sudo iptables --append RATE-LIMIT --match limit --limit 50/sec --limit-burst 20 --jump ACCEPT
The limit rule rate-limits packets by not matching them, so they fall through to the next rule. These packets we’ll drop:
Quote:
sudo iptables --append RATE-LIMIT --jump DROP
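To make the fall-through behaviour concrete, here is a small Python sketch of a chain evaluating its rules in order. This is purely an illustrative model (the function and rule names are my own), not how iptables is actually implemented:

```python
# Illustrative model of chain evaluation: rules are tried in order, and
# the first rule that matches a packet decides its fate. A packet that
# no rule matches "falls through".

def evaluate_chain(rules, packet):
    """rules: list of (match_fn, target) pairs; first match wins."""
    for match, target in rules:
        if match(packet):
            return target
    return "RETURN"  # nothing matched; fall back to the chain policy

# The RATE-LIMIT chain above, with the limit match stubbed out:
have_credit = lambda pkt: pkt["credits_left"] > 0  # stand-in for --match limit
rate_limit_chain = [
    (have_credit, "ACCEPT"),     # --match limit ... --jump ACCEPT
    (lambda pkt: True, "DROP"),  # unconditional --jump DROP
]

print(evaluate_chain(rate_limit_chain, {"credits_left": 3}))  # ACCEPT
print(evaluate_chain(rate_limit_chain, {"credits_left": 0}))  # DROP
```

The key point is that the limit rule "rate-limits" only by declining to match; the unconditional DROP rule after it does the actual dropping.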
The rate limit (above, 50/sec) is enforced as follows. Each limit rule has a bank account which stores “credits”. Credits are tokens to be spent on matching packets. The limit rule only earns credits one way: through its salary, which is one credit per “tick”. The above rule earns one credit every 20ms (1/50 of a second), thus restricting its spending to at most 50 matches per second. When a new packet comes in, if the rule has at least one credit, the credit is spent and the packet is matched. Otherwise, the packet falls through due to insufficient funds.
This “credit” scheme allows “bursty” traffic. If 100 different users just happen to connect at the same time, we want to allow them all. The credit scheme’s tolerance of random fluctuation is desirable, but it has an undesirable side-effect. Note that, during the night when users are asleep, the rule could earn a huge amount of credits, which can then be spent to allow users to overload the system in the morning when they all open connections at once. We want to allow “bursty” traffic, but only up to a limit.
This is exactly what --limit-burst fixes. The burst limit is a cap on the number of credits that the rule can have in its account. The above rule is only allowed 20 credits. If the rule already has 20 credits when a tick happens, it doesn’t earn any more credits. This “use it or lose it” logic prevents enormous build-ups of credit, and thus prevents overloading the system. (The burst limit also happens to be the rule’s initial number of credits, so it doesn’t have to wait to build up credits.)
Another quiz! If a limit rule is configured as --limit 50/sec --limit-burst 20, and then receives 1 packet every millisecond over a period of 1 second, how many packets will be matched? 70. The first 20 packets will be accepted almost immediately, depleting the credit to 0. The credit will then recharge by 1 every 20ms (1/50 of a second), allowing a single packet through each time it recharges. That gives another 50 matches over the second, for a total of 70.
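You can check the quiz's arithmetic with a short Python simulation of the credit scheme. The names and structure here are my own; this is a model of the token-bucket logic described above, not iptables code:

```python
# Simulate --limit 50/sec --limit-burst 20 against 1 packet per
# millisecond for one second.

def simulate(rate_per_sec, burst, packet_times_ms):
    credits = burst                 # the account starts full, at the burst limit
    tick_ms = 1000 / rate_per_sec   # one credit earned per tick (20 ms here)
    next_tick = tick_ms
    matched = 0
    for t in packet_times_ms:
        # pay out any salary ticks that happened before this packet
        while next_tick <= t:
            credits = min(credits + 1, burst)  # the burst limit caps the account
            next_tick += tick_ms
        if credits >= 1:
            credits -= 1
            matched += 1
    return matched

packets = list(range(1, 1001))      # one packet every millisecond for 1 second
print(simulate(50, 20, packets))    # 70: the 20-credit burst plus 50 recharges
```

The excess 930 packets fall through to the DROP rule, exactly as intended.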
Without the limit module, we were blocking all new connections. This was useless, and the limit module is a big improvement: we can now prevent our server from getting destroyed by huge numbers of new connections. But there is a fundamental limitation with limit in our multi-tenant system: it applies this rate limit globally. If a single client exceeds the limit, connections from clients who are using their fair share will be dropped.
One solution would be to match on a blacklist of source IP addresses. But this would require us to manually add new IPs to the tables (or implement our own system for doing this). Ideally we want to rate limit every source IP address separately. This is exactly what the hashlimit module is for.
Fix #3: Rate limiting per IP address with hashlimit
Quote:
$ sudo iptables --flush # start again
$ sudo iptables --new-chain RATE-LIMIT
$ sudo iptables --append RATE-LIMIT \
--match hashlimit \
--hashlimit-upto 50/sec \
--hashlimit-burst 20 \
--hashlimit-name conn_rate_limit \
--jump ACCEPT
$ sudo iptables --append RATE-LIMIT --jump DROP
To instead limit per source IP, we need to tell hashlimit to group by source IP address. We do this with the --hashlimit-mode parameter, which defines how to group the packets. With --hashlimit-mode srcip, we create a group per source IP:
Quote:
$ sudo iptables --append RATE-LIMIT \
--match hashlimit \
--hashlimit-mode srcip \
--hashlimit-upto 50/sec \
--hashlimit-burst 20 \
--hashlimit-name conn_rate_limit \
--jump ACCEPT
$ sudo iptables --append RATE-LIMIT --jump DROP
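Conceptually, --hashlimit-mode srcip gives every source IP its own credit account instead of one shared global account. A minimal Python sketch of that idea (recharge ticks are omitted for brevity, and the class name is my own invention):

```python
from collections import defaultdict

# Hypothetical model of per-source rate limiting: one credit bucket per
# source IP, created on first sight. Credit recharging is omitted here;
# only the per-key isolation is shown.

class PerSourceLimiter:
    def __init__(self, burst):
        self.burst = burst
        # each source IP gets its own bucket, starting at the burst limit
        self.credits = defaultdict(lambda: burst)

    def allow(self, src_ip):
        if self.credits[src_ip] >= 1:
            self.credits[src_ip] -= 1
            return "ACCEPT"
        return "DROP"  # fell through to the DROP rule

limiter = PerSourceLimiter(burst=20)
for _ in range(25):                # an attacker exhausts only their own bucket
    verdict = limiter.allow("6.6.6.6")
print(verdict)                     # DROP: 6.6.6.6 is over its burst
print(limiter.allow("1.2.3.4"))    # ACCEPT: other clients are unaffected
```

This is the whole point of hashlimit over limit: one misbehaving client can no longer spend the credits of everyone else.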
Now suppose a packet comes in on an existing connection from 1.2.3.4:456. What happens to the packet? It will be accepted. Since it’s an existing connection, the packet never jumps to the RATE-LIMIT chain. Even if 1.2.3.4 has exhausted its hashlimit credits, this doesn’t affect its existing connections!
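The reason is the conntrack gate in the INPUT chain: only packets in state NEW jump to RATE-LIMIT. A small Python sketch of that dispatch (illustrative names and addresses, not real iptables behaviour):

```python
# Model of: iptables --append INPUT --match conntrack --ctstate NEW \
#                    --jump RATE-LIMIT
# Established connections never reach the RATE-LIMIT chain at all.

established = {("1.2.3.4", 456)}   # connections conntrack already knows about

def input_chain(src_ip, src_port, rate_limit_verdict):
    if (src_ip, src_port) not in established:
        return rate_limit_verdict(src_ip)  # NEW: the rate limit applies
    return "ACCEPT"  # ESTABLISHED: skips RATE-LIMIT, chain policy accepts

# even if 1.2.3.4 has no hashlimit credits left...
no_credits = lambda ip: "DROP"
print(input_chain("1.2.3.4", 456, no_credits))  # ACCEPT: existing connection
print(input_chain("5.6.7.8", 999, no_credits))  # DROP: new connection, no credit
```

So the rate limit throttles the creation of connections, not traffic on connections that already exist.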
Like conntrack, hashlimit tables have a maximum number of entries, and you should not let the table fill up. You can view the current hash table entries in /proc/net/ipt_hashlimit/conn_rate_limit. You should set --hashlimit-htable-max higher than the number of entries you expect to see in that file, and set --hashlimit-htable-size to around max/4.
Success!
We finally had the tools to rate limit new connections: conntrack and hashlimit. Deploying the rules brought the flood of new connections under control.
Keeping an eye on your dropped connections
Dropping packets is risky because if an error is made in the rule, legitimate connections will be silently dropped. To keep an eye on your rate-limited connections, you have a couple of options. One is iptables --list --verbose, which shows the number of packets that have matched a rule; the count is under the pkts column. For more information, you can use the log module from earlier. Let’s add a new rule to log each dropped packet:
Quote:
$ sudo iptables --append RATE-LIMIT \
--match hashlimit \
--hashlimit-mode srcip \
--hashlimit-upto 50/sec \
--hashlimit-burst 20 \
--hashlimit-name conn_rate_limit \
--jump ACCEPT
$ sudo iptables --append RATE-LIMIT --jump LOG --log-prefix "IPTables-Rejected: "
$ sudo iptables --append RATE-LIMIT --jump REJECT
You may find it excessive to log every single packet that is dropped; most likely it will be just as useful if we log a sample. How can we do this? Answer: the limit module again! Let’s see how we can do this:
Quote:
$ sudo iptables --append RATE-LIMIT --match limit --limit 1/sec --jump LOG --log-prefix "IPTables-Rejected: "
This means at most one dropped packet per second will be logged. (Note that this LOG rule must sit before the final REJECT rule, as in the saved rule set shown below; a rule appended after an unconditional REJECT is never reached.) I think this is a neat demonstration of how these simple and general modules can be composed in rules; we have used the limit module to achieve two things that are superficially very different: rate limiting and log sampling!
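The same credit scheme, reused as a log sampler, can be sketched in Python. This is my own model of a 1/sec limiter placed in front of a logger, not iptables code:

```python
# A limit-style token bucket used for sampling: at most one event per
# second is let through to the logger.

def make_limiter(rate_per_sec, burst=1):
    state = {"credits": burst, "last": 0.0}
    def allow(now):
        # earn credits for elapsed time, capped at the burst limit
        earned = int((now - state["last"]) * rate_per_sec)
        if earned:
            state["credits"] = min(state["credits"] + earned, burst)
            state["last"] += earned / rate_per_sec
        if state["credits"] >= 1:
            state["credits"] -= 1
            return True
        return False
    return allow

should_log = make_limiter(1)  # models --match limit --limit 1/sec
drop_times = [0.0, 0.1, 0.2, 1.05, 1.5, 2.3]   # seconds at which drops occur
logged = [t for t in drop_times if should_log(t)]
print(logged)  # [0.0, 1.05, 2.3] -- at most one log line per second
```

Exactly the same mechanism that throttled connections now throttles log volume.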
Persisting iptables
iptables chains and rules are stored in memory. If you reboot the machine, they are lost. For this reason, iptables ships with tools for saving and loading rule definitions: iptables-save and iptables-restore. To save the current set of rules to a file, run
Quote:
$ sudo iptables-save | tee rules.txt
*filter
:INPUT ACCEPT [66398]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [45516]
:RATE-LIMIT - [0:0]
-A RATE-LIMIT -m hashlimit --hashlimit-upto 50/sec --hashlimit-burst 20 --hashlimit-mode srcip --hashlimit-name conn_rate_limit -j ACCEPT
-A RATE-LIMIT -m limit --limit 1/sec -j LOG --log-prefix "IPTables-Rejected: "
-A RATE-LIMIT -j REJECT --reject-with icmp-port-unreachable
COMMIT
To load the saved rules back (for example, at boot), run sudo iptables-restore < rules.txt.
I wish you a healthy day without attacks.
Best Regards