
50$ for blocking A2S_Info attacks

Join Date: Nov 2019
Old 09-07-2020 , 02:34   Re: 50$ for blocking A2S_Info attacks
#11

Originally Posted by Maxximou5 View Post
Did you compile the extension with AMBuild or the Makefile? If you're using Debian 10 or later you will need to make adjustments; the base extension, as compiled, will only work up to Debian 9.8.
Hi Maxximou5, do you know about this error? I'm trying to compile it on Ubuntu 18.04.

Originally Posted by digin View Post
I got this error when trying to compile it:

PHP Code:
c++ -pipe -fno-strict-aliasing -Wall -Werror -Wno-uninitialized -Wno-unused -Wno-switch -msse -m32 -fvisibility=hidden -mfpmath=sse -g3 -ggdb3 -std=c++11 -fvisibility-inlines-hidden -fno-exceptions -fno-rtti -fno-threadsafe-statics -Wno-non-virtual-dtor -Wno-overloaded-virtual -Wno-delete-non-virtual-dtor -Dstricmp=strcasecmp -D_stricmp=strcasecmp -D_snprintf=snprintf -D_vsnprintf=vsnprintf -DHAVE_STDINT_H -DGNUC -D_LINUX -DPOSIX -DMMS_GENERATED_BUILD -DSE_EPISODEONE= -DSE_HL2DM= -DSE_DOI=20 -DSE_DODS= -DSE_INSURGENCY=19 -DSE_LEFT4DEAD2=15 -DSE_NUCLEARDAWN=13 -DSE_ORANGEBOX= -DSE_BMS=10 -DSE_CSGO=21 -DSE_SDK2013= -DSE_LEFT4DEAD=12 -DSE_BLADE=18 -DSE_CSS= -DSE_TF2=11 -DSOURCE_ENGINE=12 -DCOMPILER_GCC -DNO_HOOK_MALLOC -DNO_MALLOC_OVERRIDE -I/root/alliedmodders/mmsource-1.10/obj-linux/includes -I/root/alliedmodders/mmsource-1.10/versionlib -I/root/alliedmodders/mmsource-1.10/public -I/root/alliedmodders/mmsource-1.10/query_cache -I/root/alliedmodders/mmsource-1.10/query_cache/sourcehook -I/root/alliedmodders/mmsource-1.10/loader -I/root/alliedmodders/hl2sdk-l4d/public -I/root/alliedmodders/hl2sdk-l4d/public/engine -I/root/alliedmodders/hl2sdk-l4d/public/mathlib -I/root/alliedmodders/hl2sdk-l4d/public/vstdlib -I/root/alliedmodders/hl2sdk-l4d/public/tier0 -I/root/alliedmodders/hl2sdk-l4d/public/tier1 -I/root/alliedmodders/hl2sdk-l4d/public/game/server -I/root/alliedmodders/hl2sdk-l4d/game/shared -I/root/alliedmodders/hl2sdk-l4d/common -I/root/alliedmodders/mmsource-1.10/core -I/root/alliedmodders/mmsource-1.10/core/sourcehook -c /root/alliedmodders/mmsource-1.10/query_cache/qcache_mm.cpp -o qcache_mm.o
In file included from /root/alliedmodders/hl2sdk-l4d/public/tier1/convar.h:21,
                 from /root/alliedmodders/hl2sdk-l4d/public/eiface.h:16,
                 from /root/alliedmodders/mmsource-1.10/core/ISmmAPI.h:46,
                 from /root/alliedmodders/mmsource-1.10/core/ISmmPlugin.h:39,
                 from /root/alliedmodders/mmsource-1.10/query_cache/qcache_mm.h:18,
                 from /root/alliedmodders/mmsource-1.10/query_cache/qcache_mm.cpp:19:
/root/alliedmodders/hl2sdk-l4d/public/tier1/utlmemory.h: In member function ‘void CUtlMemory<T, I>::Swap(CUtlMemory<T, I>&)’:
/root/alliedmodders/hl2sdk-l4d/public/tier1/utlmemory.h:330:2: error: there are no arguments to ‘V_swap’ that depend on a template parameter, so a declaration of ‘V_swap’ must be available [-fpermissive]
  V_swap( m_nGrowSize, mem.m_nGrowSize );
/root/alliedmodders/hl2sdk-l4d/public/tier1/utlmemory.h:330:2: note: (if you use ‘-fpermissive’, G++ will accept your code, but allowing the use of an undeclared name is deprecated)
/root/alliedmodders/hl2sdk-l4d/public/tier1/utlmemory.h:332:2: error: there are no arguments to ‘V_swap’ that depend on a template parameter, so a declaration of ‘V_swap’ must be available [-fpermissive]
  V_swap( m_nAllocationCount, mem.m_nAllocationCount );
Build failed: query_cache/qcache_mm.l4d/qcache_mm.
digin is offline
Join Date: Feb 2016
Old 09-13-2020 , 22:00   Re: 50$ for blocking A2S_Info attacks
#12

If you need to compile an extension or SourceMod, I really recommend clang 3.8; it's what's used for SourceMod/Metamod releases. If you're on Ubuntu, you can grab the clang 3.8 tarball from Ubuntu 16.04 to compile with, and those errors should go away.


The link doesn't seem to work, though.

You might want to temporarily change your compiler by setting CC=/path/to/clang-bins/clang-3.8 and CXX=/path/to/clang-bins/clang-3.8 before configuring.
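If the build system honors the CC/CXX environment variables (AMBuild does), something like the following should work; the install path below is just an assumption for illustration, so substitute wherever you unpacked the tarball:

```shell
# Point the build at clang 3.8. /opt/clang-3.8 is a hypothetical unpack
# location -- replace it with your own path.
export CC=/opt/clang-3.8/bin/clang
export CXX=/opt/clang-3.8/bin/clang++
# Re-run configure.py and ambuild afterwards so the new compiler is picked up.
echo "building with CC=$CC and CXX=$CXX"
```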


About this attack: there isn't much you can do. You can either redirect those requests through a proxy (a program that emulates A2S_INFO responses and intercepts the packets before they reach the game server), or get DDoS protection, e.g. from OVH.
Without DDoS protection the problem remains on the wire itself: you can drop the packets in the kernel and your server will run fine, but your bandwidth can still be saturated, depending on how many packets they send.
I'm not an expert in networking, but that's why "DDoS protection" exists; it's essentially big servers with a ton of bandwidth and a firewall sitting in front of yours. It doesn't remove the attack, it just absorbs it at the protection provider instead of at your server directly, which I guess is why DDoS is still a thing nowadays.
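For the kernel-side option, one common approach is to rate-limit just the A2S_INFO query packets, whose payload begins with FF FF FF FF 54 (0x54 is 'T', followed by "Source Engine Query"). Here's a sketch, assuming the server listens on the default srcds port 27015 and the kernel has the string and hashlimit match modules; the rate numbers are illustrative, not tuned:

```shell
# Accept at most a few A2S_INFO queries per second per source IP; drop the
# rest. The hex payload prefix FF FF FF FF 54 identifies an A2S_INFO query.
sudo iptables --append INPUT --protocol udp --dport 27015 \
    --match string --algo bm --hex-string '|FFFFFFFF54|' \
    --match hashlimit --hashlimit-mode srcip \
    --hashlimit-upto 1/sec --hashlimit-burst 3 \
    --hashlimit-name a2s_info --jump ACCEPT
sudo iptables --append INPUT --protocol udp --dport 27015 \
    --match string --algo bm --hex-string '|FFFFFFFF54|' \
    --jump DROP
```

This keeps the server visible in the server browser (a trickle of queries per source still gets through) while flattening floods, but it only protects the host's CPU; as noted above, it cannot stop the packets from consuming your inbound bandwidth.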

Last edited by Xutax_Kamay; 09-13-2020 at 22:24.
Xutax_Kamay is offline
AlliedModders Donor
Join Date: Feb 2013
Old 09-13-2020 , 23:23   Re: 50$ for blocking A2S_Info attacks
#13

Originally Posted by digin View Post
hi Maxximou5, do you know about this error? i'm trying compile it on Ubuntu 18.04
I compiled it with clang using AMBuild 2.2.
Maxximou5 is offline
Senior Member
Join Date: Jul 2015
Old 09-15-2020 , 09:57   Re: 50$ for blocking A2S_Info attacks
#14

The solution to this problem is quite simple; I have solved it on my side because I have been hit by this attack many times before. If you still have problems, you can add me and get help via Steam.
my steam nick: 42global42
link: https://steamcommunity.com/id/LaRoVV66xD/

Fix #1: blocking an IP with iptables
sudo iptables --append INPUT --source <attacker-ip> --jump DROP
Modules warmup: the conntrack and log modules
$ sudo iptables --flush # start again
$ sudo iptables --append INPUT --protocol tcp --match conntrack --ctstate NEW --jump LOG --log-prefix "NEW TCP CONN: "
Fix #2: Rate limiting with the limit module
$ sudo iptables --flush # start again
$ sudo iptables --new-chain RATE-LIMIT
$ sudo iptables --append INPUT --match conntrack --ctstate NEW --jump RATE-LIMIT
Then in the RATE-LIMIT chain, create a rule which matches no more than 50 packets per second. These are the connections we’ll accept per second, so jump to ACCEPT. (I’ll explain --limit-burst later.)
$ sudo iptables --append RATE-LIMIT --match limit --limit 50/sec --limit-burst 20 --jump ACCEPT
The limit rule rate-limits packets by not matching them, so they fall through to the next rule. These packets we’ll drop:
sudo iptables --append RATE-LIMIT --jump DROP

The rate limit (above, 50/sec) is enforced as follows. Each limit rule has a bank account which stores "credits". Credits are tokens to be spent on matching packets. The limit rule only earns credits one way: through its salary, which is one credit per "tick". The above rule earns one credit every 20ms, thus restricting its spending to at most 50 matches per second. When a new packet comes in, if the rule has at least one credit, the credit is spent and the packet is matched. Otherwise, the packet falls through due to insufficient funds.

This "credit" scheme allows "bursty" traffic. If 100 different users just happen to connect at the same time, we want to allow them all. The credit scheme's tolerance of random fluctuation is desirable, but it has an undesirable side-effect: overnight, while users are asleep, the rule could earn a huge pile of credits, which could then be spent all at once in the morning when everyone opens connections, overloading the system. We want to allow "bursty" traffic, but only up to a limit.

This is exactly what --limit-burst fixes. The burst limit is a cap on the number of credits the rule can hold in its account. The above rule is only allowed 20 credits. If the rule already has 20 credits when a tick happens, it doesn't earn any more. This "use it or lose it" logic prevents enormous build-ups of credit, and thus prevents overloading the system. (The burst limit also happens to be the rule's initial number of credits, so it doesn't have to wait to build them up.)

Another quiz! If a limit rule is configured as --limit 50/sec --limit-burst 20, and then receives 1 packet every millisecond over a period of 1 second, how many packets will be matched? About 70. The first 20 packets are accepted in the first 20ms, depleting the credit to 0. The credit then recharges by 1 every 20ms (1/50 seconds), allowing a single packet through each time it recharges, which adds roughly another 50 over the second, for a total of about 70.
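That arithmetic can be checked with a small simulation of the credit scheme (a sketch of the accounting described above, not the limit module's actual implementation):

```shell
# Token-bucket simulation: 50/sec means one credit per 20 ms tick, burst cap 20.
# Feed it one packet per millisecond for one second and count the matches.
credit=20      # --limit-burst: the bucket starts full
last_earn=0    # time (ms) up to which credits have been earned
matched=0
for t in $(seq 0 999); do
    earned=$(( (t - last_earn) / 20 ))        # whole 20 ms ticks elapsed
    if [ "$earned" -gt 0 ]; then
        credit=$(( credit + earned ))
        if [ "$credit" -gt 20 ]; then credit=20; fi   # cap at the burst limit
        last_earn=$(( last_earn + earned * 20 ))
    fi
    if [ "$credit" -ge 1 ]; then
        credit=$(( credit - 1 ))              # spend a credit: packet matched
        matched=$(( matched + 1 ))
    fi
done
echo "$matched"   # prints 69: the 20-packet burst plus 49 recharges
```

It prints 69 rather than a round 70 because the 50th recharge would land at exactly t=1000 ms, just outside the one-second window.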

Without the limit module, we were blocking all new connections. This was useless, and the limit module is a big improvement: we can now prevent our server from getting destroyed by huge numbers of new connections. But there is a fundamental limitation with limit in our multi-tenant system: it applies this rate limit globally. If a single client exceeds the limit, connections from clients who are using their fair share will be dropped.

One solution would be to match on a blacklist of source IP addresses. But this would require us to manually add new IPs to the tables (or implement our own system for doing this). Ideally we want to rate limit every source IP address separately. This is exactly what the hashlimit module is for.

Fix #3: Rate limiting per IP address with hashlimit
$ sudo iptables --flush # start again
$ sudo iptables --new-chain RATE-LIMIT
$ sudo iptables --append RATE-LIMIT \
--match hashlimit \
--hashlimit-upto 50/sec \
--hashlimit-burst 20 \
--hashlimit-name conn_rate_limit \
--jump ACCEPT
$ sudo iptables --append RATE-LIMIT --jump DROP
To instead limit per source IP, we need to tell hashlimit to group by source IP address. We do this with the --hashlimit-mode parameter, which defines how to group the packets. With --hashlimit-mode srcip, we create a group per source IP:
$ sudo iptables --append RATE-LIMIT \
--match hashlimit \
--hashlimit-mode srcip \
--hashlimit-upto 50/sec \
--hashlimit-burst 20 \
--hashlimit-name conn_rate_limit \
--jump ACCEPT
$ sudo iptables --append RATE-LIMIT --jump DROP

What happens to a packet arriving on an existing connection? It will be accepted: since it belongs to an established connection, it never jumps to the RATE-LIMIT chain. Even a source IP that has exhausted its hashlimit credits keeps its existing connections!

Like conntrack, hashlimit tables have a maximum number of entries, and you should not let the table fill up. You can view the current hash table entries in /proc/net/ipt_hashlimit/conn_rate_limit. You should set --hashlimit-htable-max higher than the number of lines there, and set --hashlimit-htable-size to max/4.


We finally have the tools to rate limit new connections: conntrack and hashlimit.

Keeping an eye on your dropped connections

Dropping packets is risky: if there is an error in the rule, legitimate connections will be silently dropped. To keep an eye on your rate-limited connections, you have a couple of options. One is iptables --list --verbose, which shows the number of packets that have matched each rule, under the pkts column. For more detail, you can use the log module from earlier. Let's add a new rule to log each dropped packet:

$ sudo iptables --append RATE-LIMIT \
--match hashlimit \
--hashlimit-mode srcip \
--hashlimit-upto 50/sec \
--hashlimit-burst 20 \
--hashlimit-name conn_rate_limit \
--jump ACCEPT
$ sudo iptables --append RATE-LIMIT --jump LOG --log-prefix "IPTables-Rejected: "
$ sudo iptables --append RATE-LIMIT --jump REJECT
You may find it excessive to log every single dropped packet; logging a sample is likely just as useful. How can we do this? Answer: the limit module again!
$ sudo iptables --append RATE-LIMIT --match limit --limit 1/sec --jump LOG --log-prefix "IPTables-Rejected: "
This means at most one dropped packet per second will be logged. It's a neat demonstration of how these simple, general modules compose in rules: we have used the limit module for two superficially very different jobs, rate limiting and log sampling!

Persisting iptables
iptables chains and rules are stored in memory; if you reboot the machine, they are lost. For that reason, iptables ships tools for saving and loading rule definitions. To save the current rule set to a file, run

$ sudo iptables-save | tee rules.txt
:RATE-LIMIT - [0:0]
-A RATE-LIMIT --match hashlimit --hashlimit-upto 50/sec --hashlimit-burst 20 --hashlimit-mode srcip --hashlimit-name conn_rate_limit -j ACCEPT
-A RATE-LIMIT --match limit --limit 1/sec -j LOG --log-prefix "IPTables-Rejected: "
-A RATE-LIMIT -j REJECT --reject-with icmp-port-unreachable
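To load the saved rules back (after a reboot, or from a boot script), feed the file to iptables-restore:

```shell
# Re-apply the rule set saved above with iptables-save.
sudo iptables-restore < rules.txt
```

On Debian/Ubuntu, the iptables-persistent package automates this at boot, loading rules from /etc/iptables/rules.v4.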

I wish you a healthy day without attacks.
Best Regards

Last edited by LaRoVV66; 09-15-2020 at 09:59.
LaRoVV66 is offline
