> Time Machine only works over a LAN with destinations that support AFP. This is at least in part because of Time Machine's reliance on Unix hard links, and also in part because it has to be able to ensure that any OS X files with HFS+-specific metadata are correctly preserved.
This is not the reason. Time Machine does support hard links, legacy Mac metadata, and other Unix features on network destinations. It does this by writing all the data into large blobs (a sparse bundle) with an embedded filesystem of its choosing (i.e., HFS+). It can use any destination filesystem for the blobs, including FAT.
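A minimal sketch in Python (a scratch directory standing in for the mounted sparse bundle; this is not Time Machine's actual code) of the hard-link behaviour the embedded HFS+ volume provides. The point is that the link lives inside the bundle's own filesystem, so the destination filesystem underneath never needs to support it:

    import os, tempfile

    # Stand-in for the mounted sparse bundle volume (really an HFS+ mount point).
    backup_volume = tempfile.mkdtemp()
    original = os.path.join(backup_volume, "file.txt")
    link = os.path.join(backup_volume, "file-hardlink.txt")

    with open(original, "w") as f:
        f.write("backed-up data")

    os.link(original, link)   # hard link, as Time Machine uses for unchanged files
    assert os.stat(original).st_ino == os.stat(link).st_ino
    print(os.stat(original).st_nlink)   # 2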
Finally. Someone at Apple must be a Bukowski fan. I'm reminded of his poem "16 Bit Intel 8088 Chip" (not his greatest, but suitable):
http://bukowskiforum.com/threads/16-bit-intel-8088-chip.2791...
Back in the early nineties I worked at Miramar Systems on an AFP server, and in fact a full AppleTalk stack, that ran on Windows 3.11 (VxDs!) and OS/2. Macs could speak full AFP (and whatever the printer protocol was called) to a network of PCs.
IBM sold a version of our stuff called LanServer for Macintosh, so back then Macs and AFP were covered!
It was quite a popular product at the time. Although I never enjoyed working on Macs, I thought AFP was pretty cool. We all had "Inside AppleTalk" pretty much memorised - what a great book.
I would have preferred NFSv4 over SMB2. They are quite similar technically, but the former has less chance of veering off into supporting strange Windowsisms that would be hard to translate to a POSIX client. That said, SMB2 is widely deployed, and Microsoft is innovating in SMB faster than NFS is improving.
Fortunately, OS X does not use Samba as its SMB2 client.
This is great. I can finally interoperate with Linux and Windows.
Every time I connected with AFP, my CPU would spike to 100% under Ubuntu.
Can someone chime in with the pros and cons of each network filesystem? And which is a good fit for Linux - or rather, for those OSes that don't need to cooperate with Windows? Was NFS ever updated - or replaced? How much of SMB is now open after the court rulings? And is there one that is technically better than another?
OS X's interoperability with PCs is actually more badly broken than this, which is mind-boggling, because if Apple got this one thing right more people would be willing to buy a Mac Mini and put it on their home networks. I recently tried to use an external device full of NTFS-formatted hard drives on a Mac Mini. The first thing I discovered was that OS X can't natively write to NTFS-formatted drives. Even after you find and purchase third-party apps that enable writing to NTFS volumes, OS X can't share them via SMB. This is because Apple's own SMB implementation, the one they tried to replace Samba with, is broken, so you have to disable it and install the open source SMB server anyway. There are quite a few hoops to jump through to accomplish all this.
So there is no built-in way to share external drives connected to a Mac Mini over the network if they are NTFS-formatted.
I'm hoping this results in vastly improved SMB support, which, in full agreement with other commenters, has been infuriating since Apple decided to roll their own. I frequently hop over to my Windows machine to manage my Windows Home Server, even though I'm only doing simple SMB communication and file cleanup that should work fine in OS X, but doesn't.
Related: I take it there is no maintained open source SMB server that isn't GPLv3 these days? Sucks, since Apple abandoned samba2. How stupid would it be to use Apple's old samba2 for an appliance? (Guess: very?)
SMB has an extension mechanism, and SMB1 has had support for Unix extensions for over 15 years - I was the author of the original Unix extensions spec. You can get full Unix semantics (links etc.) using them.
The predominant form of extension is an "info level". Somewhat analogous to a data structure like that returned from stat, the numeric info level controls what structure is returned (or supplied). Microsoft had a tendency to add new info levels corresponding to whatever the in-kernel data structures were in a particular release, rather than aiming for longer-term good design.
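A rough sketch of that pattern in Python, with made-up level numbers and field layouts (the real SMB info levels and wire formats differ): the numeric level selects which fixed, stat-like structure gets packed into the reply.

    import os
    import struct

    # Hypothetical info levels -- illustrative only, not real SMB values.
    INFO_BASIC = 0x101      # timestamps plus attribute flags
    INFO_STANDARD = 0x102   # sizes plus link count

    def query_path_info(path, info_level):
        """Pack a stat-like structure whose layout is chosen by info_level."""
        st = os.stat(path)
        if info_level == INFO_BASIC:
            # three little-endian 64-bit timestamps and a 32-bit attributes field
            return struct.pack("<QQQI", int(st.st_ctime), int(st.st_atime),
                               int(st.st_mtime), 0)
        if info_level == INFO_STANDARD:
            # allocation size, end-of-file offset, number of hard links
            return struct.pack("<QQI", st.st_blocks * 512, st.st_size, st.st_nlink)
        raise ValueError("unsupported info level")

Adding a new level means adding another branch with another packed layout, which is exactly how the set of levels tended to grow.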
The general chattiness comes from their terrible clients, like Windows Explorer (akin to the Finder for Mac folk). I once did a test opening a zip file using Explorer. If you hand-crafted the requests it would take 5 of them - open the file, get the size, read the zip directory from the end of the file, close it. Windows XP sent 1,500 requests and waited synchronously for each one to finish. Windows Vista sent 3,000, but the majority were asynchronous, so the total elapsed time was similar.
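For comparison, here is that hand-crafted sequence sketched in Python against a local file (hypothetical code, just to show how few operations the task actually needs): open the file, get the size, read the end of the file where the zip directory lives, close it.

    import os
    import struct

    def read_zip_directory(path):
        fd = os.open(path, os.O_RDONLY)                 # 1. open the file
        try:
            size = os.fstat(fd).st_size                 # 2. get the size
            tail = os.pread(fd, min(size, 64 * 1024),
                            max(0, size - 64 * 1024))   # 3. read from the end
            eocd = tail.rfind(b"PK\x05\x06")            # end-of-central-directory record
            if eocd < 0:
                raise ValueError("not a zip file")
            entries = struct.unpack_from("<H", tail, eocd + 10)[0]
            # 4. (a second read at the central-directory offset would fetch
            #    the full listing)
            return entries
        finally:
            os.close(fd)                                # 5. close it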
I worked on WAN accelerators for a while, where you can cache, read ahead, and write behind in order to provide LAN performance despite going over WAN links. In one example a 75KB Word memo was opened over a simulated link between Indonesia and California. It took over two minutes - but was instantaneous with a WAN accelerator. The I/O block size with SMB is 64KB, so they could have got the entire file in two reads, but didn't.
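A small sketch of the arithmetic and of the read-ahead idea, assuming a hypothetical remote_read(offset, length) call that stands in for one round trip over the WAN link:

    BLOCK = 64 * 1024        # the SMB I/O block size mentioned above
    FILE_SIZE = 75 * 1024    # the 75KB Word memo

    # Minimum number of block-sized transfers for the whole file:
    # ceil(75KB / 64KB) = 2
    print(-(-FILE_SIZE // BLOCK))   # 2

    def prefetch_whole_file(remote_read, size, block=BLOCK):
        # Read ahead: fetch the file in full-sized blocks up front, so later
        # client requests are answered from the local cache instead of
        # crossing the WAN link again.
        cache = bytearray()
        for offset in range(0, size, block):
            cache += remote_read(offset, min(block, size - offset))
        return bytes(cache)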
If anyone is curious about what it was like writing an SMB server in the second half of the nineties, I wrote about it at http://www.rogerbinns.com/visionfs.html