AFS distributed filesystem FAQ


Archive-name: afs-faq
Version: 1.113
Last-modified: Thursday 9th July 1998

AFS frequently asked questions

   This posting contains answers to frequently asked questions about AFS.
   Your comments and contributions are welcome.

   Most newsreaders can skip from topic to topic with control-G.
U  URLs: file:///afs/
Subject: Table of Contents:

   0  Preamble
      0.01  Purpose and Audience
      0.02  Acknowledgements
      0.03  Disclaimer
      0.04  Release Notes
      0.05  Quote

   1  General
      1.01  What is AFS?
      1.02  Who supplies AFS?
      1.03  What is /afs?
      1.04  What is an AFS cell?
      1.05  What are the benefits of using AFS?
            1.05.a  Cache Manager
            1.05.b  Location independence
            1.05.c  Scalability
            1.05.d  Improved security
            1.05.e  Single systems image (SSI)
            1.05.f  Replicated AFS volumes
            1.05.g  Improved robustness to server crash
            1.05.h  "Easy to use" networking
            1.05.i  Communications protocol
            1.05.j  Improved system management capability
U     1.06  Which systems is AFS available for?
U     1.07  What does "ls /afs" display in the Internet AFS filetree?
      1.08  Why does AFS use Kerberos authentication?
      1.09  Does AFS work over protocols other than TCP/IP?
      1.10  How can I access AFS from my PC?
      1.11  How does AFS compare with NFS?

   2  Using AFS
      2.01  What are the differences between AFS and a unix filesystem?
      2.02  What is an AFS protection group?
      2.03  What are the AFS defined protection groups?
      2.04  What is an AFS access control list (ACL)?
      2.05  What are the AFS access rights?
      2.06  What is pagsh?
      2.07  Why use a PAG?
      2.08  How can I tell if I have a PAG?
      2.09  Can I still run cron jobs with AFS?
      2.10  How much disk space does a 1 byte file occupy in AFS?
      2.11  Is it possible to specify a user who is external
            to the current AFS cell on an ACL?
      2.12  Are there any problems printing files in /afs?
      2.13  Can I create a fifo (aka named pipe) in /afs?
      2.14  If an AFS server crashes, do I have to reboot my AFS client?
      2.15  Can I use AFS on my diskless workstation?
      2.16  Can I test for AFS tokens from within my program?
      2.17  What's the difference between /afs/cellname and /afs/.cellname?
      2.18  Can I klog as two users on one machine in the same cell?
      2.19  What are the ~/.__afsXXXX files?

   3  AFS administration
      3.01  Is there a version of xdm available with AFS authentication?
      3.02  Is there a version of xlock available with AFS authentication?
      3.03  What is /afs/@cell?
      3.04  Given that AFS data is location independent, how does
            an AFS client determine which server houses the data
            its user is attempting to access?
      3.05  Which protocols does AFS use?
      3.06  Are setuid programs executable across AFS cell boundaries?
      3.07  How does AFS maintain consistency on read-write files?
      3.08  How can I run daemons with tokens that do not expire?
      3.09  Can I check my user's passwords for security purposes?
      3.10  Is there a way to automatically balance disk usage across
            fileservers?
      3.11  Can I shutdown an AFS fileserver without affecting users?
      3.12  How can I set up mail delivery to users with $HOMEs in AFS?
      3.13  Should I replicate a ReadOnly volume on the same partition
            and server as the ReadWrite volume?
      3.14  Should I start AFS before NFS in /etc/inittab?
      3.15  Will AFS run on a multi-homed fileserver?
      3.16  Can I replicate my user's home directory AFS volumes?
      3.17  Which TCP/IP ports and protocols do I need to enable
            in order to operate AFS through my Internet firewall?
      3.18  What is the Andrew Benchmark?
U     3.19  Is there a version of HP VUE login with AFS authentication?
      3.20  How can I list which clients have cached files from a server?
      3.21  Do Backup volumes require as much space as ReadWrite volumes?
      3.22  Should I run timed on my AFS client?
      3.23  Why should I keep /usr/vice/etc/CellServDB current?
      3.24  How can I keep /usr/vice/etc/CellServDB current?
      3.25  How can I compute a list of AFS fileservers?
      3.26  How can I set up anonymous FTP login to access /afs?
      3.27  Where can I find the Andrew Benchmark?

   4  Getting more information
      4.01  Is there an anonymous FTP site with AFS information?
      4.02  Which USENET newsgroups discuss AFS?
      4.03  Where can I get training in AFS?
U     4.04  Where can I find AFS resources in World Wide Web (WWW)?
      4.05  Is there a mailing list for AFS topics?
U     4.06  Where can I find an archive of
      4.07  Where can I find an archive of alt.filesystems.afs?
U     4.08  Where can I find AFS related GIFs?
      4.09  Gibt es eine deutsche AFS Benutzer Gruppe?
      4.10  Donde puedo encontrar informacion en Espanol sobre AFS?

   5  About the AFS faq
U     5.01  How can I get a copy of the AFS faq?
      5.02  How can I get my question (and answer) into the AFS faq?
U     5.03  How can I access the AFS faq via the World Wide Web?

   6  Bibliography

   7  Change History

Subject: 0  Preamble

Subject: 0.01  Purpose and audience

   The aim of this compilation is to provide information about AFS including:

      + A brief introduction
      + Answers to some often asked questions
      + Pointers to further information

   Definitive and detailed information on AFS is provided in Transarc's
   AFS manuals ([23], [24], [25]).

   The intended audience ranges from people who know little of the subject
   and want to know more to those who have experience with AFS and wish
   to share useful information by contributing to the faq.

Subject: 0.02  Acknowledgements

   The information presented here has been gleaned from many sources.
   Some material has been directly contributed by people listed below.

   I would like to thank the following for contributing:

        Pierette Maniago VanRyzin (Transarc)
        Lyle Seaman (Transarc)
        Joseph Jackson (Transarc)
        Dan Lovinger (Microsoft)
        Lucien Van Elsen (IBM)
        Jim Rees (University of Michigan)
        Derrick J. Brashear (Carnegie Mellon University)
        Hans-Werner Paulsen (MPI fuer Astrophysik, Garching)
        Margo Hikida (Hewlett Packard)
        Michael Fagan (IBM)
        Robert Malick (National Institute of Health, USA)
        Rainer Toebbicke (European Laboratory for Particle Physics, CERN)
        Mic Bowman (Transarc)
        Mike Prince (IBM)
        Bob Oesterlin (IBM)
        Pat Wilson (Dartmouth College)
        Cristian Espinoza (Pontificia Universidad Catolica de Chile)
        Mary Ann DelBusso (Transarc)
        Michael Niksch (IBM)
N       Kelly Chambers (Transarc)

   Thanks also to indirect contributors:

        Ken Paquette (IBM)
        Lance Pickup (IBM)
        Lisa Chavez (IBM)
        Dawn E. Johnson (Transarc)
        David Snearline (University of Michigan Engineering)
        Rens Troost (New Century Systems)
        Anton Knaus (Carnegie Mellon University)
        Mike Shaddock (SAS Institute Inc.)

   If this compilation has any merit then much credit belongs to Pierette
   for giving inspiration, support, answers, and proof-reading.

Subject: 0.03  Disclaimer

   I make no representation about the suitability of this
   information for any purpose.

   While every effort is made to keep the information in
   this document accurate and current, it is provided "as is"
   with no warranty expressed or implied.

Subject: 0.04  Release Notes

   This compilation contains material used with permission of
   Transarc Corporation. Permission to copy is given provided any
   copyright notices and acknowledgements are retained.

   Column 1 is used to indicate changes from the last issue:

      N = new item
      U = updated item

   Changes from the last version are to be found at the end of this file.
Subject: 0.05  Quote

   "'Tis true; there's magic in the web of it;"         Othello, Act 3 Scene 4
                                             --William Shakespeare (1564-1616)
Subject: 1  General

Subject: 1.01  What is AFS?

   AFS is a distributed filesystem that enables co-operating hosts
   (clients and servers) to efficiently share filesystem resources
   across both local area and wide area networks.

   AFS is marketed, maintained, and extended by Transarc Corporation.
   AFS is based on a distributed file system originally developed
   at the Information Technology Center at Carnegie-Mellon University
   that was called the "Andrew File System".

   "Andrew" was the name of the research project at CMU - honouring the
   founders of the University.  Once Transarc was formed and AFS became a
   product, the "Andrew" was dropped to indicate that AFS had gone beyond
   the Andrew research project and had become a supported, product quality
   filesystem. However, there were a number of existing cells that rooted
   their filesystem as /afs. At the time, changing the root of the filesystem
   was a non-trivial undertaking. So, to save the early AFS sites from having
   to rename their filesystem, AFS remained as the name and filesystem root.

Subject: 1.02  Who supplies AFS?

        Transarc Corporation          phone: +1 (412) 338-4400
        The Gulf Tower
        707 Grant Street              fax:   +1 (412) 338-4404
        Pittsburgh, PA 15219          email:
        United States of America


Subject: 1.03  What is /afs?

   The root of the AFS filetree is /afs. If you execute "ls /afs" you will
   see directories that correspond to AFS cells (see below). These cells
   may be local (on same LAN) or remote (eg halfway around the world).

   With AFS you can access all the filesystem space under /afs with commands
   you already use (eg: cd, cp, rm, and so on) provided you have been granted
   permission (see AFS ACL below).
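
   For example, assuming the relevant ACLs grant you access, browsing a
   remote cell needs nothing beyond those familiar commands (the cell
   names shown here are purely illustrative of what a client might list):

       $ ls /afs
       athena.mit.edu   cern.ch   transarc.com   ...
       $ cd /afs/cern.ch
       $ ls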

Subject: 1.04  What is an AFS cell?

   An AFS cell is a collection of servers grouped together administratively
   and presenting a single, cohesive filesystem.  Typically, an AFS cell is
   a set of hosts that use the same Internet domain name. 

   Normally, a variation of the domain name is used as the AFS cell name.

   Users log into AFS client workstations which request information and files
   from the cell's servers on behalf of the users.
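
   A client can report which cell it belongs to by asking the Cache
   Manager; the cell name in the sample output below is illustrative:

       $ fs wscell
       This workstation belongs to cell 'transarc.com'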

Subject: 1.05  What are the benefits of using AFS?

   The main strengths of AFS are its:
      + caching facility
      + security features
      + simplicity of addressing
      + scalability
      + communications protocol

   Here are some of the advantages of using AFS in more detail:

Subject: 1.05.a  Cache Manager

   AFS client machines run a Cache Manager process. The Cache Manager
   maintains information about the identities of the users logged into
   the machine, finds and requests data on their behalf, and keeps chunks
   of retrieved files on local disk.

   The effect of this is that as soon as a remote file is accessed
   a chunk of that file gets copied to local disk and so subsequent
   accesses (warm reads) are almost as fast as to local disk and
   considerably faster than a cold read (across the network).

   Local caching also significantly reduces the amount of network traffic,
   improving performance when a cold read is necessary.
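
   On an AFS client you can ask the Cache Manager how much of the local
   cache is currently in use (the figures below are illustrative):

       $ fs getcacheparms
       AFS using 4771 of the cache's available 100000 1K byte blocks.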

Subject: 1.05.b  Location independence

   Unlike NFS, which makes use of /etc/filesystems (on a client) to map
   (mount) between a local directory name and a remote filesystem, AFS
   does its mapping (filename to location) at the server. This has the
   tremendous advantage of making the served filespace location independent.

   Location independence means that a user does not need to know which
   fileserver holds the file, the user only needs to know the pathname
   of a file. Of course, the user does need to know the name of the
   AFS cell to which the file belongs. Use of the AFS cellname as the
   second part of the pathname (eg: /afs/$AFSCELL/somefile) is helpful
   to distinguish between file namespaces of the local and non-local
   AFS cells.

   To understand why such location independence is useful, consider
   having 20 clients and two servers. Let's say you had to move
   a filesystem "/home" from server a to server b.

   Using NFS, you would have to change the /etc/filesystems file on 20
   clients and take "/home" off-line while you moved it between servers.

   With AFS, you simply move the AFS volume(s) which constitute "/home"
   between the servers. You do this "on-line" while users are actively
   using files in "/home" with no disruption to their work.

   (Actually, the AFS equivalent of "/home" would be /afs/$AFSCELL/home
   where $AFSCELL is the AFS cellname.)

Subject: 1.05.c  Scalability

   With location independence comes scalability. An architectural goal
   of the AFS designers was client/server ratios of 200:1 which has
   been successfully exceeded at some sites.
   Transarc does not recommend that customers use the 200:1 ratio. A more
   cautious value of 50:1 is expected to be practical in most cases.
   It is certainly possible to work with a ratio somewhere between
   these two values. The exact value depends on many factors including:
   number of AFS files, size of AFS files, rate at which changes are made,
   rate at which files are being accessed, speed of the servers' processors,
   I/O rates, and network bandwidth.

   AFS cells can range from the small (1 server/client) to the massive
   (with tens of servers and thousands of clients).
   Cells can be dynamic: it is simple to add new fileservers or clients
   and grow the computing resources to meet new user requirements.

Subject: 1.05.d  Improved security

   Firstly, AFS makes use of Kerberos to authenticate users.
   This improves security for several reasons:

     + passwords do not pass across the network in plaintext

     + encrypted passwords no longer need to be visible

          You don't have to use NIS, aka yellow pages, to distribute
          /etc/passwd - thus "ypcat passwd" can be eliminated.

          If you do choose to use NIS, you can replace the password
          field with "X" so the encrypted password is not visible.
          (These issues are discussed in detail in [25]).

     + AFS uses mutual authentication - both the service provider
       and service requester prove their identities

   Secondly, AFS uses access control lists (ACLs) to enable users to
   restrict access to their own directories.
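
   For example, a user could list and then tighten the ACL on one of
   their own directories like this (the directory and user names are
   hypothetical):

       $ fs listacl ~/private
       Access list for /afs/$AFSCELL/usr/elmer/private is
       Normal rights:
         elmer rlidwka
         system:anyuser l
       $ fs setacl ~/private -acl system:anyuser none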

Subject: 1.05.e  Single systems image (SSI)

   Establishing the same view of filestore from each client and server
   in a network of systems (that comprise an AFS cell) is an order of
   magnitude simpler with AFS than it is with, say, NFS.

   This is useful to do because it enables users to move from workstation
   to workstation and still have the same view of filestore. It also
   simplifies part of the systems management workload.

   In addition, because AFS works well over wide area networks the SSI
   is also accessible remotely.

   As an example, consider a company with two widespread divisions
   (and two AFS cells). Mr Fudd, based in the New York office, is
   visiting the San Francisco office.

   Mr. Fudd can then use any AFS client workstation in the San Francisco
   office that he can log into (an unprivileged guest account would suffice).
   He could authenticate himself to his home cell and securely access
   his New York filespace.

   For example, the following shows what a guest user could do:
       {0} add the AFS executables directory to PATH
       {1} obtain a PAG with the pagsh command (see 2.06)
       {2} use the klog command to authenticate into the home AFS cell
       {3} make a HOME away from home
       {4} invoke a homely .profile

       $ PATH=/usr/afsws/bin:$PATH                      # {0}
       $ pagsh                                          # {1}
       $ klog -cell -principal elmer                    # {2}
       $ HOME=/afs/; export HOME                        # {3}
       $ cd
       $ . .profile                                     # {4}
       you have new mail
       guest@toontown $

   It is not necessary for the San Francisco sys admin to give Mr. Fudd
   an AFS account in the local cell.  Mr. Fudd only needs to be
   able to log into an AFS client that is:
      1) on the same network as his home cell and
      2) in a cell which mounts his home cell in its /afs filetree
         (as would certainly be the case in a company with two cells).

Subject: 1.05.f  Replicated AFS volumes

   AFS files are stored in structures called Volumes.  These volumes
   reside on the disks of the AFS file server machines.  Volumes containing
   frequently accessed data can be read-only replicated on several servers.

   Cache Managers (on users' client workstations) will make use of
   replicated volumes to load balance.  If the Cache Manager is accessing
   data from one replicated copy and that copy becomes unavailable due to
   server or network problems, AFS will automatically start accessing the
   same data from a different replicated copy.

   An AFS client workstation will access the closest volume copy.
   By placing replicated volumes on servers closer to clients (eg on the
   same physical LAN), access to those resources is improved and network
   traffic is reduced.
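
   Replication is administered with the vos command: a ReadOnly site is
   defined for a volume and the data is then pushed to it with a release
   (the server, partition, and volume names below are hypothetical):

       $ vos addsite fs2 /vicepa root.cell
       $ vos release root.cell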

Subject: 1.05.g  Improved robustness to server crash

   The Cache Manager maintains local copies of remotely accessed files.
   This is accomplished in the cache by breaking files into chunks
   of up to 64k (default chunk size). So, for a large file, there may be
   several chunks in the cache but a small file will occupy a single chunk
   (which will be only as big as is needed).
   A "working set" of files that have been accessed on the client is
   established locally in the client's cache (copied from fileserver(s)).
   If a fileserver crashes, the client's locally cached file copies 
   remain readable but updates to cached files fail while the server is down.
   Also, if the AFS configuration has included replicated read-only volumes 
   then alternate fileservers can satisfy requests for files from those
   volumes.
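
   As a sketch of the chunking arithmetic (assuming the default 64k,
   ie 65536 byte, chunk size):

```shell
# Number of 64k cache chunks needed to hold a file of a given size in bytes.
# A small file occupies a single (partial) chunk; a large file spans several.
chunks() {
    echo $(( ($1 + 65535) / 65536 ))
}

chunks 1          # 1 byte file  -> 1 chunk
chunks 1048576    # 1 MB file    -> 16 chunks
```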

Subject: 1.05.h  "Easy to use" networking

   Accessing remote file resources via the network becomes much simpler
   when using AFS. Users have much less to worry about: want to move
   a file from a remote site? Just copy it to a different part of /afs.

   Once you have wide-area AFS in place, you don't have to keep local
   copies of files. Let AFS fetch and cache those files when you need them.

Subject: 1.05.i  Communications protocol

   The AFS communications protocol is optimized for Wide Area Networks.
   It retransmits only the single bad packet in a batch of packets,
   and it allows the number of unacknowledged packets to be higher
   than in other protocols (see [4]).

Subject: 1.05.j  Improved system management capability

   Systems administrators are able to make configuration changes
   from any client in the AFS cell (it is not necessary to login
   to a fileserver). With AFS it is simple to effect changes without
   having to take systems off-line.

   As an example, a department (with its own AFS cell) was relocated to
   another office. The cell had several fileservers and many clients.
   How could they move their systems without causing disruption?

   First, the network infrastructure was established to the new location.
   The AFS volumes on one fileserver were migrated to the other fileservers.
   The "freed up" fileserver was moved to the new office and connected
   to the network. A second fileserver was "freed up" by moving its AFS
   volumes across the network to the first fileserver at the new office.
   The second fileserver was then moved. This process was repeated until
   all the fileservers were moved.

   All this happened with users on client workstations continuing
   to use the cell's filespace. Unless a user saw a fileserver
   being physically moved, (s)he would have no way to tell the change
   had taken place. Finally, the AFS clients were moved - this was noticed!

Subject: 1.06  Which systems is AFS available for?

   AFS runs on systems from: HP, NeXT, DEC, IBM, Sun, and SGI.

   Transarc customers have done ports to Crays and the 3090, but all
   are based on some flavour of unix.  Some customers have done work to
   make AFS data available to PCs and Macs, although this usually relies
   on something similar to the AFS/NFS translator (a system that enables
   "NFS only" clients to NFS mount the AFS filetree /afs).

   There is a client-only implementation, "AFS Client for Windows/NT".

N  A page describing the current systems for which AFS is supported
N  may be found at:
   There are also ports of AFS done by customers available from Transarc
   on an "as is" unsupported basis.
   More information on this can be found at:
   These ports of AFS client code include:
      HP (Apollo) Domain OS - by Jim Rees at the University of Michigan.
      sun386i - by Derek Atkins and Chris Provenzano at MIT.
      Linux - by Derek Atkins, mailing list: 
      NetBSD - by John Kohl, mailing list: 

   There is some information about AFS on OS/2 at:

N  The AFS on Linux FAQ may be found at:

Subject: 1.07  What does "ls /afs" display in the Internet AFS filetree?

   Essentially this displays the AFS cells that co-operate in the
   Internet AFS filetree.

   Note that the output of this will depend on the cell you do it from;
   a given cell may not have all the publicly advertised cells available,
   and it may have some cells that aren't advertised outside of the given site.

   The definitive source for this information is:


   I've included the list of cell names included in it below:

        #ASU
        #Albert-Ludwigs-Universitat Freiburg
        #Argonne National Laboratory
        #Argonne National Laboratory MCS Division
        #Axlan-CEA
        #Bloomsbury Computing Consortium
        #Boston University
        #Brown University Department of Computer Science
        #CASPUR Inter-University Computing Consortium, Rome
        #CIESIN
        #CIP-Pool of Math. Dept, Univ. Stuttgart
        #Caltech Computer Graphics Group
        #Cards - Electronic Warfare Associates
        #Carnegie Mellon Univ. Chemical Engineering Dept.
        #Carnegie Mellon University
        #Carnegie Mellon University - Campus
        #Carnegie Mellon University - Civil Eng. Dept.
        #Carnegie Mellon University - Elec. Comp. Eng. Dept.
        #Carnegie Mellon University - Mechanical Engineering
        #Carnegie Mellon University - School of Comp. Sci.
        #Carnegie Mellon University Computer Club
        #CERT/Coordination Center
        #Chalmers University of Technology - General users
        #CIP Pool, Rechenzentrum University of Stuttgart
        #Clarkson University, Potsdam, USA
        #Cornell University Materials Science Center
        #Cornell University Program of Computer Graphics
        #Cornell University Theory Center
        #DESY-IfH Zeuthen
        #Dartmouth College, Project Northstar
        #Deutsches Elektronen-Synchrotron
        #Deutsches Klimarechenzentrum Hamburg
        #DIS, Univ. "La Sapienza", Rome, area Buonarotti
        #EMSL's AFS Cell
        #Tuebingen, WS-Pools
        #Energy Sciences Net
        #Esprit Research Network of Excellence
        #EMSL's DCE Cell
        #European Laboratory for Particle Physics, Geneva
        #Fermi National Accelerator Laboratory
        #Fachhochschule Heilbronn
        #hephy-vienna
        #HP Cupertino
        #HP Palo Alto
        #IBM Hursley Laboratories (UK), external cell
        #IBM UK, AIX Systems Support Centre
        #IBM Zurich Internet Cell
        #IBM/4C, Chalmers, Sweden
        #IPP site at Greifswald
        #IN2P3 production cell
        #INFN Laboratori Nazionali di Gran Sasso, Italia
        #INFN Sezione di Lecce, Italia
        #INFN Sezione di Pisa
        #Institut fuer Kernenergetik, Universitaet Stuttgart
        #Institut fuer Plasmaphysik
        #Institut fuer Computeranwendungen, Uni. Stuttgart
        #Iowa State University
        #Istituto Nazionale di Fisica Nucleare, Italia
        #Jet Propulsion Laboratory
        #Johannes-Gutenberg-Universitaet Mainz
        #KTH College of Engineering
        #Keio University, Fac. of Sci. & Tech. Computing Ctr
        #Keio University, Japan
        #Konrad-Zuse-Zentrum fuer Informationstechnik Berlin
        #Lehrstuhl A fuer Thermodynamik, TUM
        #Leibniz-Rechenzentrum Muenchen Germany
        #MIT/Athena cell
        #MIT/Network Group cell
        #MIT/SIPB cell
        #Michigan State University home cell
        #Max-Planck-Institut fuer Astrophysik
        #Multi Resident AFS at Naval Research Lab - CCS
        #NTT Information and Communication
        #National Energy Research Supercomputer Center
        #National Institutes of Health
        #National Renewable Energy Laboratory
        #Naval Research Lab
        #Naval Research Lab - Lab for Computational Physics
        #Naval Research Laboratory
        #NCSU - College of Engineering
        #NCSU Campus
        #North Carolina Agricultural and Technical State U.
        #North Carolina State University - Backbone Prototype
        #OSF Research Institute
        #OSF Research Institute, Grenoble
        #Otto-von-Guericke-Universitaet, Magdeburg
N       #OVPIT at Indiana University
        #PSC (Pittsburgh Supercomputing Center)
        #Penn State
        #Physics Department, Brookhaven National Lab
        #Pohang University of Science
        #Princeton Plasma Physics Laboratory
        #Real World Computer Partnership (rwcp)
        #Rechenzentrum University of Jena, Germany
        #Rechenzentrum University of Kaiserslautern
        #Rechenzentrum University of Stuttgart
   rhic #Relativistic Heavy Ion Collider
        #Rensselaer Polytechnic Institute
        #Rheinische Friedrich Wilhelm Universitaet Bonn
        #Rose-Hulman Institute of Technology
        #Rose-Hulman Inst. of Tech., CS Department
        #Royal Institute of Technology, NADA
        #Rutherford Appleton Lab, England
        #Stanford Linear Accelerator Center
        #Stanford Univ. - Comp. Sci. - Distributed Systems
        #Stanford University
        #Supercomputer Computations Research Institute
        #Swiss Federal Inst. of Tech. - Zurich, Switzerland
        #TH-Darmstadt
        #Technical University of Braunschweig, Germany
        #Technische Universitaet Chemnitz-Zwickau, Germany
        #Telos Systems Group - Chantilly, Va.
        #Transarc Corporation
        #UC Santa Cruz, Comp and Tech Services, California
        #UMR - Missouri's Technological University
        #US High Energy Physics Information cell
        #Uni Mannheim (Rechenzentrum)
        #Univ California - Davis campus
        #Univ. of Cologne Inst. for Geophysics & Meteorology
        #Univ. of Cologne Inst. for Geophysics & Meteorology
N       #Univ. Rome-1, Dept. of Computer Science
U       #Univ. Rome-1, Area San Pietro in Vincoli
N       #Univ. Rome-3, Area Vasca Navale
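
   Cell databases like the one above are distributed in CellServDB
   format: each cell starts with a ">cellname  #description" line,
   followed by the addresses of its database servers. Extracting just
   the cell names is a one-liner (the sample entries below are
   illustrative, not authoritative):

```shell
# Create a small CellServDB-style sample, then list the cell names in it.
cat > CellServDB.sample <<'EOF'
>transarc.com           #Transarc Corporation
158.98.14.3             #ernie.transarc.com
>athena.mit.edu         #MIT/Athena cell
18.72.0.43              #orf.mit.edu
EOF

# Cell header lines begin with ">"; strip it and print the first field.
awk '/^>/ { sub(/^>/, "", $1); print $1 }' CellServDB.sample
```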
