8 January 2002

These respond to "On ISPs Not Filtering Viruses:" http://cryptome.org/isp-nofilter.htm

It was the Code Red worm that affected Cryptome's machines, which run Unix (Linux on the previous machine and Sun Solaris on the two now in use).


From: An ISP Sysadmin.

On Sun, 6 Jan 2002, John Young wrote:

> We've also had bad luck in getting our ISP, Verio, to filter
> viruses. The sysadmins we've discussed it with provide
> varying explanations why this is not possible.

First, let me specifically note that the topic here appears to be the "Code Red" type of self-propagating code.  This is very important, as my discussion here is specifically not applicable to other types of malicious code or events.

Assuming that we are dealing only with the Code Red type of "worm", it is in fact possible, on a technical level (and has been discussed in great detail by every ISP/NSP of which I am personally aware) to filter in a manner which would halt any spread.

Necessary Background:

The traditional model is for an ISP/NSP to provide end-to-end connectivity, period.  Under this model, the interception and pre-processing of any packet is an absolute no-no.  Further, there may be legal liability for carriers: liability if you claim to intercept and "miss", and liability if your interception succeeds, but against the wrong packets (thereby interrupting the guaranteed data flow you have sold).

Because of the extreme E&O ("Errors and Omissions") risk in agreeing to filter on a subscriber's behalf, this type of thing must be priced on a completely separate model.

This value-added model often goes under the name of "Managed Security Services" or something similar.

Assuming you have not purchased some type of provider-managed security service, there is no financial incentive, and in fact there may be significant financial exposure, in the provider making any attempt to filter this type of activity.  Nevertheless, there are some conditions under which such filtering is beginning to be seriously considered by the larger ISPs/NSPs (I am specifically referring here to very large national or multi-national backbone providers, a group of about a dozen vendors).

In the "traditional" core (or "backbone"), the provider had a bunch of really big routers connecting the various points-of-presence ("POP"s).  As packets came in from subscribers, they would be gradually passed up the router food-chain until eventually they would hit a "core" router, which would send them on to the destination POP.  Under this model, Code Red type activity is a CPU-intensive nuisance to the provider, but certainly not any kind of critical emergency.

The "modern" core consists of rather more intelligent devices.  These devices are, in the final analysis, just a form of "big router"; however, internally they are designed quite differently.  These modern routers will often combine routing, firewalls, possibly VPN functionality, etc., and implement it all where the backbone is, rather than where the subscriber is (as with the "traditional" model).

These "provider provisioned networks" operate on a small (less than a dozen) number of platforms.  And all of these platforms have an incredible amount of processing power (some of these boxes can have dozens of processors) - far more power than could ever be used by "normal" subscribers.

Enter Code Red:

Code Red and its descendants (hereafter, "CRs") operate by first infecting a machine, and then using that machine to break into other machines.  This is not, in and of itself, new.  What is new to CRs is the methodology used: CRs are armed with a large "arsenal" of possible attacks.  As an infected machine attempts to break into its neighbors, it tries each one of the attacks it knows about, until it either succeeds or runs out of attacks (yes, for those technical folk out there, this is a gross simplification - deal with it).
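To make the mechanics concrete, here is a toy sketch (Python, with an invented list of attack names - no real exploit code) of the try-each-attack-until-one-works loop described above:

```python
# Toy model of the CR propagation loop: an infected host walks through its
# "arsenal" against a neighbor until an attack succeeds or the list runs out.
# The attack names and vulnerability sets are invented for illustration.
ARSENAL = ["idq_overflow", "unicode_traversal", "printer_overflow"]

def try_infect(target_vulnerabilities):
    """Return the first attack in the arsenal the target is vulnerable to,
    or None if the arsenal is exhausted without a hit."""
    for attack in ARSENAL:
        if attack in target_vulnerabilities:
            return attack   # success - stop trying further attacks
    return None             # ran out of attacks
```

The detail that matters is that every failed attempt still costs at least one network connection, which is what makes the scanning itself so expensive.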

It is the CRs' ability to use multiple attacks that sets them apart.  It is also the property that makes CRs just as dangerous to the providers as they are to the subscribers, although for different reasons.

As CRs go through their list of attacks, going after one machine after another, they eat up phenomenal amounts of bandwidth.  Even worse, this usage is not just raw capacity but also "connections" ("flows" for the more technical).  Each new attack creates at least one, and sometimes several, new connections from the infected machine to the machine under attack.  And since the infections themselves tend to grow exponentially rather than linearly, the network-wide number of connections climbs to almost unthinkable numbers very quickly.

The traditional core router will groan under this kind of load, but that's about all.  It's not a very smart device - it just takes packets from machine "A", looks to see where they are headed, and sends them on their way.

The "modern" core device though is a lot different: it will attempt to keep track of each one of the "connections" being made.  It does this by necessity - only by knowing the complete "state" of every connection it touches can it do its job of providing all of these new "provider services".  And no matter how much processing power the device has, eventually it is going to run out of resources under these conditions - no designer has ever built a network device with CRs in mind (although they are now starting).
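A rough way to see why state tracking, not raw bandwidth, is the killer: a back-of-the-envelope model (all numbers invented) of a flow table filling up under exponential worm growth:

```python
# Toy model of a stateful "modern" core device: it must keep an entry for
# every flow it sees.  Assume the infected population doubles each
# generation, and each infected host opens scans_per_host new flows per
# generation.  Capacity and rates are invented round numbers.
TABLE_CAPACITY = 1_000_000

def generations_until_exhaustion(initial_infected=1, scans_per_host=100):
    infected, flows, generations = initial_infected, 0, 0
    while flows < TABLE_CAPACITY:
        flows += infected * scans_per_host   # new state entries this round
        infected *= 2                        # exponential worm growth
        generations += 1
    return generations
```

Under these made-up numbers, a single infected host exhausts a million-entry flow table in about fourteen doubling periods - the "almost unthinkable numbers very quickly" problem in miniature.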

The Internal Discussions At ISP/NSPs:

Because of the tremendous load placed on modern core equipment by CRs, every ISP/NSP that uses it has at least begun looking at the feasibility of filtering CRs before they can hit the core, and possibly bring it, or parts of it, down.  Unfortunately, this discussion brings with it all of the attendant issues that filtering in the "traditional" core has, namely, "Errors and Omissions" liability.  As this E&O risk approaches the risk of losing the network infrastructure, the discussions will no doubt move out of the theoretical and into the practical.

The ways to prevent CRs from propagating - and thereby making it to the core - are based completely on content-based filtering.  Some device (ideally the subscriber router) must look into each and every packet generated by a subscriber, and then decide whether or not the packet "looks legit".  In and of itself, this is not a big deal for most low-end (but not "consumer-grade") edge routers.  What is a big deal is being able to keep up.  Edge routers already have problems keeping up with "normal" traffic patterns if those patterns contain lots of small (under ~384 bytes) packets.  Adding content-inspection functionality will require a lot more horsepower in subscriber equipment, which will of course raise the price rather significantly.
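The inspection itself is conceptually simple - the hard part, as noted, is doing it at line rate.  A minimal sketch (Python, using the well-known Code Red request path as the signature; the surrounding packet-handling machinery is invented):

```python
# Sketch of per-packet content filtering: drop any payload containing a
# known worm signature.  "GET /default.ida?" is the Code Red probe path;
# a real filter would carry a much larger signature list.
SIGNATURES = [b"GET /default.ida?"]

def should_drop(payload: bytes) -> bool:
    """Return True if the packet payload matches any known signature."""
    return any(sig in payload for sig in SIGNATURES)
```

A real edge router would have to run a check like this against every packet of every subscriber, which is exactly where the horsepower (and price) problem comes from.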

Currently, those edge devices which are [at least in theory] capable of content based filtering are not doing so.  To turn this functionality on is no small task either, when a large ISP/NSP may have many tens of thousands of these devices.  Assuming automation is even possible (which it probably is not, due solely to administrative errors made years ago, and propagated down the ages out of sheer corporate stasis), it will still require that each and every subscriber be re-contracted to allow for the necessary changes to be made.  Furthermore, the sources of CRs are not limited to only the local (provider) network - the public internet will be a lingering source, even if every one of the "major" providers were to eradicate their "internal" sources of infection and propagation. 

> Instead they
> suggest workarounds to send the known varmints to null
> or to a phony file name or even a file to collect them and
> then be emptied periodically.

This is, for the time being, a reasonable position for the average user :-(

> What is peculiar is that the sysadmins do not tell the same
> story, instead offer vague explanations when pressed.

Realize that (a) many of the sysadmin staff really don't have a good grasp on this issue (sysadmin staff tend to be 2nd/3rd tier in abilities [all sysadmin flames to /dev/null please]), and (b) the issues here are as much administrative as they are technical, and those staff that do have a good handle on this are very much caught in the middle, so mumbling nonsense is a kind of defensive response.

> When we said we wanted to purchase (rent) new space on
> alternative machines, we were told that would not solve
> the problem. That even erasing the disks on our current
> machine, and reinstalling system programs and our files
> would offer only momentary relief for the viruses would
> return. The gist of all tales was that I would have to live
> with the problem.

True _if_ you are handling your own box.  If this is a managed box, then you are being taken for a ride - a managed box should be _managed_, and part of that management is taking protective steps.  Mind you, these steps will not _always_ work, but they should work the vast majority of the time.  And when they don't, it should be your provider's problem to fix, and not yours.

> However, when I decided in frustration to switch to another
> type of Verio service, a Verio rep told me to not believe what
> the sysadmins were saying, that the problem is not technical
> but administrative. However, he would not provide detail on
> what the administrative problem is. He promised the new
> services he was offering would take care of the virus problem.

As pointed out above...

> So we rented two new Verio machines to replace a single one
> hosting our two sites, and split the archive to fit the two
> domains. For several weeks we were virus free, and only
> recently has a virus occasionally hit. And forgot about it
> until the thread here appeared.
>
> Now, we wonder if there is more to the virus filtering issue
> than has been disclosed. For example, are ISPs covertly
> assisting the authorities by not filtering, perhaps under
> willing or unwilling non-disclosure agreements.

Some of this does go on; however, the large ISP/NSPs are better than most about maintaining customer privacy in the absence of subpoenas.  Generally, records will be kept confidential without a subpoena unless there appears to be a pressing life-or-death issue at hand (at least at the places I have worked).  And after a very short while, you do get to know the local FBI agents/prosecuting attorneys/etc., so you generally know when you are being bullshitted.

September 11th was a great example.  The FBI came in with Carnivores at every ISP/NSP I know of, yet most of them were refused access.  I have to admit, even I was surprised by this, although reassured.

> Some months ago we learned that Verio had been approached
> by British intelligence to yank files from our sites and after
> discussion with me Verio refused because the files did not
> violate Verio's use policy. However, I learned during that
> episode that law enforcement agencies often make requests
> to the law department of ISPs for cooperation without providing
> documentation of justification.

Every single day.  They lose nothing by "trying" you know ;-)

> A decision is made by the
> ISP legal rep on whether to comply, and that usually is based
> on the value judgment of the legal rep and familiarity with
> the LEA contacts and/or procedures.

I concur: see above.

> We learned from a friendly customer rep who happened to
> agree with our publication of forbidden docs, that ISPs' legal
> reps keep in touch with each other on how to respond to
> official requests for assistance, whether to notify the target,
> whether to comply quietly and what procedures to set up with
> the technical and customer support staff to deflect complaints
> and press inquiries, how to keep a lid on past covert assistance,
> and how to respond to competition which may decide to exploit
> non-cooperation with authorities lacking court orders or other
> enforcement.

Absolutely.  Would you expect anything else?

> After hearing this we better understand the possibility that
> sysadmins and customer support personnel may have a variety
> of reasons for refusing to filter besides indolence and poor
> service -- that snooping and snarfing systems may be installed,
> that a dragnet operation may be underway which covers the
> territory of your machines though not necessarily targeting
> you, or you may in fact be a specific target, authorized or
> unauthorized.

If you are a target, the sysadmin staff will very likely not know about it.  You will be "snarfed" as you put it, by the engineering staff - quietly. I believe your sysadmin's "evasiveness" is based on their discomfort with explaining the highly technical and political issues at hand with a subscriber.

> To be sure, inadequate service may be an attempt to get you
> to upgrade your service contract -- as seems likely in our
> case with Verio -- or there may be competition within an
> ISP, particularly if it is a giant like Verio where departments
> are forced to compete with each other -- again as we have
> likely experienced with Verio.

The inadequate service is most certainly not by design, but rather by administrative accident coupled with technical issues.  For instance, Verio is known throughout the industry for having a horrendously performing network, due primarily to the "design" - Verio's growth was by gobbling up dozens of smaller local ISPs, and then "integrating" them into the Verio network.  So each of the local areas has to go through an unusually high number of routers to reach any given destination.  For example, in some parts of the United States, Verio subscribers need 12 hops (router interfaces) just to reach the Verio "backbone".  This is not a slam on Verio - _every_ network out there has serious flaws, these just happen to be the ones under discussion.  As a whole, the industry could have done a lot better, IMHO.

> We're now on our fourth iteration of Verio services, and would
> have moved on had Verio not bucked British intelligence and
> a few lesser attackers when other giants had cooperated.

Rewarding this type of behavior is the only way to go John!  Good for you!

> Still, we remain thoughtful about when Verio will do the dirty
> in the face of fearful terrorism or some other business opportunity
> to attack rather than be attacked.

Since Verio's acquisition, much has changed, but I have no knowledge of their current security practices - everyone I knew there has left or been laid off.


From: N

Within Australia, it is a legal requirement for carriers to support snooping systems. In the event of a customer being monitored, it may not be disclosed to the customer.

Cryptome: US law has the same provisions. Our interest is in the informal cooperation of ISPs with law enforcement where there is no supporting legal order.


From: D

Well, I'm not a Windows guy so virii are not my thing, but guessing about your bot/siphon problem, this may be of assistance:

http://www-106.ibm.com/developerworks/library/l-fw/

It is specific to a linux 2.4 kernel firewall (iptables) but may be modified to other Unices.
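For what it's worth, a sketch of 2.4-era rules along those lines (note that payload "string" matching on 2.4 kernels required the patch-o-matic ipt_string extension - stock iptables filters on headers only, so a header-only fallback is shown too):

```shell
# Drop packets carrying the Code Red probe (requires the ipt_string
# patch-o-matic extension on a 2.4 kernel):
iptables -A INPUT -p tcp --dport 80 -m string --string "default.ida" -j DROP

# Header-only fallback: rate-limit new inbound web connections instead.
iptables -A INPUT -p tcp --syn --dport 80 -m limit --limit 25/second --limit-burst 50 -j ACCEPT
iptables -A INPUT -p tcp --syn --dport 80 -j DROP
```

These are configuration fragments, not a complete ruleset - they would need to be adapted to the local chain layout and policy.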

If your attackers are sophisticated, they will change the mac-address and IP address randomly so there is little one can do, w/o closer analysis.  You do not provide much information regarding your firewall/mail system setup so it is difficult to provide specifics.


From: N

>Now, we wonder if there is more to the virus filtering issue
>than has been disclosed. For example, are ISPs covertly
>assisting the authorities by not filtering, perhaps under
>willing or unwilling non-disclosure agreements.

I think you're looking too hard for a conspiracy when simple bureaucratic friction is a more likely answer. Deploying a new feature and testing it at a major ISP is a whole lot of work. It's not clear that it contributes to the company's bottom line, either. The cost is clear (development and support), but the financial benefit is not. It's a boring explanation, but it is likely.

I'm writing because I find it interesting that some folks here *like* the idea of their ISP filtering content. I find that surprising! I assume it's motivated by the huge problem of viruses, but wouldn't it be better to fix the clients, not the pipes? There are a whole lot of risks in a network layer suddenly doing application-layer things.

What kind of viruses are you talking about, John? Most people who talk about filtering are thinking of email viruses. Lots of sites are filtering viruses at their gateway mail server, but that seems like a poor (if practical) solution to me. What we really need is email clients that aren't so stupid.


From: J

John Young, why are your web servers running virus-prone operating systems? Haven't you installed the Linux security patches on 'em and turned off all nonessential services?

I thought ISPs were supposed to be bit-pipes.  End-to-end unrestricted connectivity is the basic feature of the Internet.  This feature is what made the Internet superior to every preceding network.  If my ISP was filtering my mail or my packets, I'd complain.  (In fact, when they started to, I did complain until they changed it, and my web site still complains about it.)