Talk:Oracle ZFS/Archive 1
Storage capacity
I'm removing this: "Sun believes that this capacity will never be reached, meaning that this filesystem will never need to be modified to increase its storage capacity. Although today such an assertion seems reasonable, a number of similar statements made in the past have been proven famously wrong." The quotes given on the only referenced Sun page reference Moore's law and give plenty of marketing spin ("According to Bonwick, it has to be. 'Populating 128-bit file systems would exceed the quantum limits of earth-based storage. You couldn't fill a 128-bit storage pool without boiling the oceans.'"), but it is never stated that "Sun believes that this capacity will never be reached." Unsigned comment by 24.248.74.254 on 18:57, 9 June 2005 (UTC)
Regressing to stub
Thus far, Sun has not:
- Released the file system yet
- Given any hard technical details
- Committed to shipping ZFS in 2005
- Committed to the ZFS feature set
So, since Wikipedia is not a crystal ball, I think it would be best to remove most of the hype in this article and turn it back into a stub. —Ghakko 16:34, 18 July 2005 (UTC)
FYI, regarding the release date: when I was in a Solaris 10 training class, the trainer mentioned that the ZFS developers plan on it being completed in October or November. However, it has not been decided yet whether ZFS will be released on its own, or whether it will be released with the first major update of Solaris 10 next spring. Also, the current version doesn't support ZFS as a boot device, and when it is made public it probably won't support it either; that support will come later with an OBP update. amRadioHed 01:38, 7 October 2005 (UTC)
- ZFS is released, stub notice removed.
Comparison with other filesystems
Once the ZFS specs become clear, suggest the page at Comparison_of_file_systems is updated with ZFS details. --Oscarthecat 07:58, 7 November 2005 (UTC)
It's escaped.
Source and binary for ZFS have been released; see [1].
Requested move
Since the moniker for ZFS is now actually inaccurate (see [2]) and the other articles that expanded to ZFS were all redlinks, save for this one, this article should occupy the 'main' ZFS name. --moof 18:06, 22 November 2005 (UTC)
Destubbify
I propose that the stub tag is long since obsolete and it should be removed. I propose to do so Monday, 11/28/05 unless there are serious objections. Georgewilliamherbert 20:13, 24 November 2005 (UTC)
- Also, the Request for Expansion... no notes here in the discussion page or the RFE central page on what they were looking for, and the article seems expanded to me. I propose also removing that tag on 11/28/05 barring serious objections. Georgewilliamherbert 20:24, 24 November 2005 (UTC)
Technobabble
Some terms in this article are in dire need of clarification. e.g. "automatic length and stride detection" - sounds more like something from a triple-jump event, also "deadline scheduling" --OscarTheCattalk 10:03, 23 February 2006 (UTC)
- I agree. The terms "volume management", "storage pool", "transactional object model", "block pointer", "target block", "metadata block", and "synchronous write semantics" need to be better explained. -P
- In true encyclopedic manner, these should be linked to separate articles where they are properly explained, but certainly not here. --Puellanivis 00:54, 23 August 2006 (UTC)
- Some of these terms are industry generic, some are ZFS specific. The ZFS-specific ones should probably be explained here. The generic ones may either deserve WP pages or Wiktionary pages... Georgewilliamherbert 05:24, 23 August 2006 (UTC)
Advertisement for Sun?
Is it just me or does this whole article read like an advertisement? Ken 18:24, 24 April 2006 (UTC)
- There are a few mentions, but it seems OK. I've removed one superfluous mention by re-writing two sentences. Mindmatrix 18:30, 24 April 2006 (UTC)
Stranded Storage
Sun's marketing states "stranded storage". I'd like to happily pretend this means I can mirror data onto a drive, unplug and walk offsite with the drive, walk back with the drive a week later, plug it in, and have it automatically sync up. But any real information on this would be appreciated. Myren 01:42, 7 June 2006 (UTC)
Looks like Apple's interested in porting ZFS too
See http://www.osnews.com/story.php?news_id=14473
Linux
Is there any info available about the status of the effort to port this to Linux? Is this an internal Sun thing, or is there a project surrounding it? The article doesn't make this very clear. --Quasar 15:07, 9 July 2006 (UTC)
- I believe that it would have to be integrated into the Linux kernel for support. That is impossible, seeing as the licenses of the Linux kernel and ZFS (GPL and CDDL) are not compatible. It's very unfortunate! —msikma <user_talk:msikma> 08:13, 24 August 2006 (UTC)
- You can read about the project at http://code.google.com/soc/opsol/appinfo.html?csaid=1EEF6B271FE5408B It has links to the project home page and a blog detailing his progress. --NapoliRoma 13:08, 24 August 2006 (UTC)
"advert" for Sun.
Not added to the page as yet is that ZFS lacks transparent encryption, a la NTFS, and that presently only n+1 redundancy is possible. n+2 redundancy (RAID level 6) is also in the development branch only, via the OpenSolaris distribution. These omissions in the production branch of Solaris, as of the current Solaris 06/06 release, do diminish ZFS's attractiveness in a lot of the situations at which the new FS is targeted. I'm adding this to the talk page because I've never contributed to Wikipedia before, and some double-checking is needed anyway. I've been evaluating ZFS for some weeks now, so I'm fairly sure I'm correct :) My point being that backing up huge FSs (24TB on the latest 4U Sun 4600 box) is an absolute b*tch, and n+2 redundancy and transparent encryption are nearly standard (see the toy parity sketch at the end of this comment). The workaround I had, of using PGP command line to encrypt on an x86 OpenSolaris build, doesn't fly because PGP command line is SPARC-only as of writing, and loading OpenSolaris (for the n+2) on a SPARC box of large capacity negates all the nice support contracts.
By the by, I think the capacity specs ("2^64 — Number of devices in any zpool", etc.) are redundant technicalia in the main entry. The quotes given to put these specs in perspective are more descriptive.
One last point: there is no context given for the development of ZFS in the main entry, e.g. how to stripe huge volumes for throughput, the historic limitations of other FSs (NTFS, I'm looking at you, hanging a quad-core Xeon when a folder gets more than 20,000 objects!), how the choice of copy-on-write, which gives the snapshots, was probably influenced more by backup difficulties, and so on.
Someone could make a point or two about how general-purpose CPUs make a fair alternative to proprietary embedded OSs such as those used by Adaptec and LSI Logic (on Intel XScale), and how this means less data abstraction when things go badly wrong: with the embedded controllers, you've no chance to talk to your files save as volumes via the embedded BIOS, should you get a corrupted drive.
Last, but not least, given Sun's recent product launches, there ought to be a comparison to RAID systems of similar capability (capacity, throughput), such as those from DataDirect, and the whole interconnect problem with such a large array (InfiniBand? FC? 10GbE? Switching latency? And hence maybe even a touch upon the forthcoming Sun Niagara II chip, which has 10GbE on-die)... in other words, how do you take advantage of all the "specifications goodness" :)
Will leave this for more seasoned contributors to do with as they see fit, but will revisit sometime soon.
Kind regards to all.
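Since the n+1 vs n+2 distinction above may not be obvious to everyone: single parity can reconstruct one missing block, double parity two. Below is a toy Python sketch of the single-parity (XOR) idea only; it is not ZFS's actual RAID-Z code, and the block contents are made up for illustration.

```python
from functools import reduce

# Toy single-parity (n+1) stripe: parity is the column-wise XOR of the data blocks.
data_blocks = [b"alpha---", b"bravo---", b"charlie-"]      # hypothetical 8-byte blocks
parity = bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*data_blocks))

# Lose block 1, then rebuild it from the survivors plus the parity block.
survivors = [blk for i, blk in enumerate(data_blocks) if i != 1]
rebuilt = bytes(reduce(lambda x, y: x ^ y, col) for col in zip(parity, *survivors))
assert rebuilt == data_blocks[1]

# Losing TWO blocks leaves two unknowns and only one parity equation, so it cannot
# be solved; n+2 schemes (RAID 6, raidz2) add a second, independent parity for that.
```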
Unicode Support?
Does ZFS use Unicode natively, or even support it? I think that might be an interesting tidbit to add to the article. --Saoshyant 14:13, 11 September 2006 (UTC)
NetApp lawsuit
NetApp has sued Sun for patent violations in ZFS. I think a heading regarding these legal issues should therefore be added. Laxstar5 14:36, 6 October 2007 (UTC)
- Perhaps a "Controversy" section? Although I'm not really sure that the patent lawsuit is germane to the article; given the sue-happy nature of business in the US, if every tech article had a list of the patent lawsuits associated with it, they'd all be 100% larger than they are, and 50% less informative. Let's just note the facts. Rubicon 03:43, 8 October 2007 (UTC)
Checksums?
Sun's FAQ [3] indicates 64-bit checksums but the article indicates 256-bit checksums. Anyone have a justification for 256? --Treekids 21:45, 1 October 2007 (UTC)
- The link you provided, being from SUN, is likely accurate. I've updated the article. Rubicon 05:17, 5 October 2007 (UTC)
- Shoot. According to [4], ZFS does use 256-bit checksums. Now I'm upset. Looks like marketing hasn't been speaking with development. Reverting to 256. -Rubicon 07:56, 5 October 2007 (UTC)
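For what it's worth, the 256-bit figure refers to the checksum field kept in every block pointer; SHA-256 is one of the selectable checksum algorithms and fills that field exactly. A minimal Python illustration (the block contents here are just a stand-in):

```python
import hashlib

block = b"stand-in for the contents of an on-disk ZFS block"
digest = hashlib.sha256(block).digest()   # SHA-256 is one of ZFS's selectable checksums

print(len(digest) * 8)   # -> 256: the width of the checksum field in a block pointer
print(digest.hex())
```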
Boiling the oceans
Perhaps we could remove the boiling the oceans quote, since it really does not add anything to the article. 66.68.63.11 06:34, 2 June 2007 (UTC)
- It adds to the history and understanding of the article: this quote is one of the notable things in ZFS's history, and it helps to put its architectural limitations (or lack thereof) in terms more people can understand. Almost all of us who are used to scientific notation still don't have a "feel" for numbers like 18.4 × 10^18.... Hga 11:32, 2 June 2007 (UTC)
- Along the same lines, is it really necessary to express all the storage mentioned in the article as a power of two? I don't recall being overwhelmed with joy that the 120 GB hdd I bought two years ago had a capacity of a touch less than 2^37 bytes. Of course, when speaking of things like 'the number of entries in a directory', it's much neater (to my eyes) to say "2^48" than it is to say "281,474,976,710,656" - but do we really need to show, for example, 16 EB in terms of powers of two of bytes? Rubicon 07:20, 9 October 2007 (UTC)
- I find it amusing that while I can have a 16 EiB file, the filename limit is still 255 characters. Needless to say, the latter is significantly more relevant to me. Superm401 - Talk 11:11, 21 February 2008 (UTC)
- Could this be for POSIX compliance (NAME_MAX)?.--NapoliRoma (talk) 13:55, 21 February 2008 (UTC)
- Given that the math has been done within the article demonstrating that the boiling the oceans quote is factually inaccurate, really, shouldn't it be yanked? I came to this article to learn about ZFS, and I found that portion—right at the top of the article as it is—baffling and unhelpful. (As is the whole 2^x business, which is meaningless to me and surely the great majority of readers.)--WaldoJ (talk) 18:04, 14 September 2008 (UTC)
- It shouldn't be removed, but its inaccuracy should be clarified (the German entry has some math on it, saying that it would take at least a 156-bit filesystem to boil the oceans). 78.53.101.166 (talk) 14:55, 14 October 2008 (UTC)
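For anyone curious, the kind of calculation referred to above is easy to reproduce. This is a back-of-the-envelope sketch only, assuming the Landauer limit of kT·ln 2 per bit at about 300 K and counting only the heat needed to bring seawater to 100 °C (no latent heat of vaporization); the exact bit count shifts with the assumptions.

```python
import math

k = 1.380649e-23                     # Boltzmann constant, J/K
T = 300.0                            # assumed ambient temperature, K
e_bit = k * T * math.log(2)          # Landauer limit: minimum energy to set one bit

ocean_mass = 1.4e21                  # kg, rough mass of Earth's oceans
c_water = 4186.0                     # J/(kg*K), specific heat of water
e_boil = ocean_mass * c_water * 85   # heat ~15 C seawater to 100 C, ignoring vaporization

print(2**128 * e_bit)                # ~1e18 J: writing 2^128 bits is far short of "boiling"
print(e_boil)                        # ~5e26 J to bring the oceans to the boil
print(math.log2(e_boil / e_bit))     # ~157: in the same ballpark as the 156-bit figure above
```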
limitations need citing and impartiality
The article states:
"ZFS lacks transparent encryption, a la NTFS, and presently only n+1 redundancy is possible. n+2 redundancy (RAID level 6) is only in the development branch—via the OpenSolaris distribution[1]. These omissions in the production branch of Solaris (as of Solaris 06/06 current release) diminishes ZFS's attractiveness in several situations at which it's targeted."
I think that this could be written more impartially, but still convey the same facts by being rewritten to read:
"Transparent encryption is still in the process of being implemented for ZFS. [5] Some features, including N+2 redundancy (RAID level 6), which are available in OpenSolaris and Solaris Express, are not yet available in Solaris 10."
Readers can draw their own conclusions about whether those limitations diminish ZFS's attractiveness.
At a minimum, the situations in which ZFS's attractiveness is diminished need to be explicitly listed. Better yet, a published article backing up this claim could be cited.
(For the record, I am a ZFS developer.)
Mahrens 06:02, 12 September 2006 (UTC)
Citation needed.
Will some expert kindly provide a source for the sentence quoted below, so that I can reach my goal of paring down the Citation Needed references on the following page? http://en.wikipedia.org/w/index.php?title=Category:Articles_with_unsourced_statements&from=Z
The quota model and other useful management capabilities suggest the possibility of per-user filesystems, rather than simple home directories.[citation needed]
Sincerely, GeorgeLouis 06:27, 29 October 2006 (UTC)
128 bit?
Seeing as all the limits are 2^64 or less, I wonder why the 128 bit denomination. Anyone care to explain? It's probably something worth mentioning in the article. -Anonymous —The preceding unsigned comment was added by 83.138.218.1 (talk) 19:50, 19 December 2006 (UTC).
- I think you're right. I don't see anything 128-bit'ish about this filesystem. According to the article, it certainly can't store more than 2^78 bytes in a disk array, and it can only fill that if you create 16384 filesystems. There's a conflict between the specs and the claims in the Capacity section. It seems like "128-bit" is at best half-true marketing hype from Sun. If there's any basis for this number at all, it should be addressed in the article. Either way, someone who knows should definitely explain this. There are a lot of skeptics. -- Bilbo1507 00:13, 26 June 2007 (UTC)
- (Responding to myself.) I happened upon this discussion. This really needs to be explained in the article. I'd write it, but I don't feel qualified to do so. Is there someone willing to write it who is familiar enough with ZFS to write a robust explanation and cite sources? http://linux.slashdot.org/comments.pl?threshold=0&mode=nested&commentsort=0&sid=238977&cid=19569673 -- Bilbo1507 19:28, 5 July 2007 (UTC)
- Is it something like what MS did with NTFS? NTFS is a 64 bit filesystem but the first few implementations were 48 bit. So is it like 128 bit design but current implementations are limited to 64 bits? --soum talk 19:47, 5 July 2007 (UTC)
- Apparently, POSIX defines no 128-bit interfaces yet, 64 being the max available. SUN could've written their own libraries, but that would've broken POSIX compliance. So while I think it's nice that SUN can say they have a 128-bit, POSIX-compliant FS, given the situation, they could've just as easily said a 256-, 384-, or 512-bit FS. I'm not going to rock the boat and say "ZFS isn't 128-bits!", but for all practical purposes, ZFS is (for the time being) a 64-bit FS. Presumably, once POSIX catches up, it should be a (relatively) simple matter to scale-up to 128-bit, given that ZFS' block pointers allocate a 128-bit address. [6] -Rubicon 08:26, 5 October 2007 (UTC)
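To put some numbers on the 64-bit-interface point (simple arithmetic only, assuming a 64-bit signed off_t as in large-file POSIX environments, not anything ZFS-specific):

```python
max_off_t = 2**63 - 1        # largest file offset a 64-bit signed off_t can express
max_fs = 2**64               # the 16 EiB per-filesystem figure quoted in the article
full_128 = 2**128            # what 128-bit block addresses could cover in principle

print(max_off_t)             # 9223372036854775807 (~9.2 EB for a single file)
print(full_128 // max_fs)    # 2**64: the headroom today's 64-bit interfaces never touch
```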
- Personally, I don't know why we limit anything nowadays to 32, 64, 128 or any other number of bits... why choose any fixed word size at all: word, dword, qword or whatever?
To have a file or memory allocation subsystem of any size, one could use start/end delimiters (alternatively, escape sequences) and process them from memory/disk. Doubling any delimiter when the data equals the delimiter can be used to make the scheme fully data-transparent. Any structure on the stream can be referenced through this logical means, providing effectively endless storage to be accessed/referenced. Even the logical addresses themselves can be made extensible to any size needed in this way, without, in theory, any need to touch the code at all. So we could write this ghost FS today and use it forever. ZFS, like any other modern FS, is looking more and more like a database - using logs, user/OS isolation, two-phase commit, compression, blocks/clusters, redo logs and the like, reinventing the wheel only at a lower level. At the same time, my feeling is we are only increasing the limits while still keeping limits that will be hit one day. Remember, 640K ought to be enough for everyone. -- Anonymous 18:55, 1 June 2009 (UTC) —Preceding unsigned comment added by 89.201.145.160 (talk)
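The delimiter scheme sketched above is essentially byte stuffing: escape the delimiter inside the data by doubling it, and use a single delimiter to end a record. A toy Python sketch of just that idea (nothing to do with ZFS itself; it also ignores edge cases such as empty records):

```python
DELIM = b"\xff"   # arbitrary delimiter byte chosen for this toy example

def encode(record: bytes) -> bytes:
    # Double any delimiter byte inside the data, then terminate with a single delimiter.
    return record.replace(DELIM, DELIM * 2) + DELIM

def decode(stream: bytes):
    records, current, i = [], bytearray(), 0
    while i < len(stream):
        if stream[i:i+1] == DELIM:
            if stream[i+1:i+2] == DELIM:      # doubled delimiter -> literal data byte
                current += DELIM
                i += 2
            else:                             # single delimiter -> end of record
                records.append(bytes(current))
                current = bytearray()
                i += 1
        else:
            current += stream[i:i+1]
            i += 1
    return records

data = [b"plain", b"has \xff inside", b"more"]
assert decode(b"".join(encode(r) for r in data)) == data
```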
Apple / Solaris
The listed operating systems for ZFS are as follows: "Supported operating systems Solaris, Mac OS X v10.5". OS X 10.5 hasn't even been released yet. Other OSes, e.g. FreeBSD, have ports in progress; however, they are not listed in the description as being supported. Suggestion: remove OS X 10.5 from the list until its release with ZFS has been confirmed.
- Done. Please sign your posts. Chris Cunningham 11:28, 29 December 2006 (UTC)
ZFS support seems to be gone from build 9a410. It's no longer in the GUI and there appear to be no command-line binaries anymore, FWIW. Jgw 02:09, 5 May 2007 (UTC)
Lead
The lead says "It is notable..."; however, that sounds PoV (as it implies deliberately drawing attention to a particular interpretation). Please fix it to be something neutral. --soumtalk 07:06, 14 March 2007 (UTC)
- I made the change. I am open to discussion if anyone is opposed to it. --soumtalk 15:06, 20 March 2007 (UTC)
ZFS administration gui
http://blogs.sun.com/talley/entry/manage_zfs_from_your_browser —The preceding unsigned comment was added by 71.7.147.153 (talk) 04:19, 18 April 2007 (UTC).
Z-FS Disambiguation...
I was doing research on several NAS devices that claimed to support Z-FS... it turns out that it is not ZFS but instead a filesystem by a company named Zetera trademarked Z-FS. The only details on the FS I can find are in a marketing blurb here:
http://www.zetera.com/index.php?option=com_content&task=view&id=4&Itemid=7
Should there be a disambiguation note about this since I have seen this cropping up on several different NAS devices? Especially since several of them omit the hyphen and actually list it as ZFS on some of their manuals/dialog boxes.
Does anyone know which came first (trademark-wise): ZFS from Solaris, or Z-FS from Zetera? (More out of curiosity than anything.)
Aelana 18:39, 19 April 2007 (UTC)
- Done. --soum (0_o) 20:08, 19 April 2007 (UTC)
Confusing use of "file system"
The article sometimes uses the term "file system" to mean "a file system type", and sometimes to mean "an instance of a file system". This makes the article quite confusing. Especially confusing is this sentence: "Unlike a traditional file system, which resides on a single device and thus requires a volume manager to use more than one device, ZFS is built on top of virtual storage pools called zpools". Does "reside" mean the same as "use" here? If not, what is the difference? If yes, the sentence is contradictory: it says that traditional file systems reside on a single device, but sometimes not. I do not know the correct technical terms that distinguish the type from the instance. Please update the article with the correct unambiguous terms. -Pgan002 18:33, 30 April 2007 (UTC)
- Been looking at the Storage pools section of the article, and, yes, it is probably confusing for someone who's never dealt with something like LVM. Should I rewrite it, maybe, with more of a layperson in mind? -Rubicon 05:33, 5 October 2007 (UTC)
On Portal:Free software, ZFS is currently the selected article
Just to let you know. The purpose of selecting an article is both to point readers to the article and to highlight it to potential contributors. It will remain on the portal for a week or so. The previous selected article was OpenDocument. Gronky 12:24, 28 May 2007 (UTC)
- The selected article box has been updated again; ZFS has been superseded by Emacs (to mark the release of GNU Emacs v22). Gronky 13:20, 5 June 2007 (UTC)
ZFS in OS 10.5
It only seems logical to me that if ZFS is the default fs in OSX, then it must be able to act as a root partition. The statement in the article that says otherwise is outdated, yes? --70.91.110.41 21:15, 6 June 2007 (UTC)
It's nothing but a rumor until Apple makes it official. Since when did wiki become a rumor site?
Marc Hamilton has backed away from his original 'confirmation' in the comments section of his blog. He says he has no knowledge of Apple's product plans.
- Here is a link to confirmation that it will NOT be included in Leopard. [7] Should we remove the rumor then?--147.160.136.10 17:42, 12 June 2007 (UTC)
- It was not a rumor - It's widely known that Apple's OS team had ZFS running. That said, it's clearly not in Leopard, per yesterday and today's Apple announcements. We could speculate why or what's going on, but that's pointless, and Wikipedia is not supposed to speculate. It's not in Leopard, and so it should be off the list of supported OSes. Georgewilliamherbert 17:51, 12 June 2007 (UTC)
- Annnd it's back. See [8] in which Apple clarifies that it's in there but not available as a default filesystem or bootable filesystem. Sort of. The clarification isn't entirely clear, but they do say that ZFS is on the system. Georgewilliamherbert 00:44, 13 June 2007 (UTC)
Storage Units
Reading [this], ZFS' ultimate limitation is stated in terms of ZB, not ZiB; I therefore infer that where the article states EiB, it would be more accurate to use the term EB. Can't find supporting evidence for the 'max size of single file', 'max size of an attribute', etc. I've never liked the practice of using binary prefixes - always seemed like a way for a storage vendor to scam a customer into thinking they were getting more storage than they actually were. Does anyone have any references for the numbers given in the article - and proper storage units? Rubicon 07:09, 9 October 2007 (UTC)
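For reference, the EB/EiB question is just decimal versus binary prefixes, and at these sizes the gap is large enough to matter. Quick arithmetic in Python:

```python
EB  = 10**18   # exabyte, decimal prefix
EiB = 2**60    # exbibyte, binary prefix

print(2**64)                   # 18446744073709551616 bytes
print(2**64 / EiB)             # 16.0   -> "16 EiB"
print(2**64 / EB)              # ~18.45 -> "about 18.4 EB"
print((EiB - EB) / EB * 100)   # ~15.3% difference between the two prefixes at this scale
```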
ZFS in the release version of Mac OS X 10.5
I've tried to update the info on the current Mac OS X implementation, but I was not sure what of the older rumors/announcements to leave in. I hope I've found a good compromise, using as much as possible of the original text. Do you agree? Xnyhps (talk) 14:38, 30 December 2007 (UTC)
ZFS vs. zFS; google hits
In my last edit summary, I mentioned that zFS (as in Z/OS) had "fewer than 200 Google hits". I don't know what mutated search I did then, because I just tried it again and got more like 17,000 hits. This still seems to me to be fewer than would justify having it specifically mentioned in the lede, though.--NapoliRoma (talk) 21:58, 17 February 2008 (UTC)
Casablanca?
No comment on what was discussed, but Linus Torvalds and Jeff Bonwick have been talking http://blogs.sun.com/bonwick/entry/casablanca Robmbrooks (talk) 10:17, 21 May 2008 (UTC)
- The Linux compatibility part in the main article is wrong: the CDDL allows combining CDDL code with code from any other license, and the GPL does not forbid linking a GPL work against CDDL'd code. Because of the compatibility with all other licenses, Sun will not sue people who use ZFS (as long as they follow the CDDL), and Sun is happily waiting for someone to sue Sun over a CDDL/GPL combination, in order to defend against the attempt to forbid it. —Preceding unsigned comment added by 87.158.110.240 (talk) 17:50, 14 September 2008 (UTC)
Linux section: OR?
I'm concerned that the Linux section suffers from original research. The observation that ntfs-3g (a FUSE filesystem) performs "well", based on a single benchmark, is used to suggest that the ZFS port could perform "excellently". This is a bit of a leap of faith, logically. -- 87.194.117.25 (talk) 14:34, 24 May 2008 (UTC)
Now it reads: "This shows that reasonable performance is possible with ZFS on Linux after proper optimization." Are there any data supporting that? ZFS and NTFS are very different. I think this comparison is inappropriate and should be removed. At the very minimum, it should be worded a lot more carefully. 85.177.245.75 (talk) 15:09, 10 March 2009 (UTC)
Storage Pools
Can the information on storage pools be clarified or described in non-technical terms? I understand it this way: all drives show up as one device. If I have a computer with a 500GB internal drive formatted with ZFS (I'll call it Zelda), and I plug in a 500GB FireWire ZFS drive, will it automatically expand the capacity of the Zelda volume to 1TB? Will any drive I add or remove merely change the total available disk space under one volume? --24.249.108.133 22:29, 16 July 2007 (UTC)
- See this; o.p. has the wrong idea about how zpools, vdevs, etc. function. -Rubicon 05:38, 5 October 2007 (UTC)
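Roughly: the unit of pooling is the top-level vdev, not the bare drive. The pool stripes across its vdevs, a mirror vdev only contributes the size of its smallest member, and adding a vdev grows the pool immediately, but a plain two-vdev stripe has no redundancy and vdevs could not simply be unplugged again. A toy Python model of just the capacity bookkeeping (not real ZFS behaviour), using the 500 GB drives from the question above:

```python
GB = 10**9

def single(disk_gb):
    """A single-disk vdev contributes its whole capacity, with no redundancy."""
    return disk_gb * GB

def mirror(*disks_gb):
    """A mirror vdev contributes only the capacity of its smallest member."""
    return min(disks_gb) * GB

class Zpool:
    def __init__(self, *vdevs):
        self.vdevs = list(vdevs)
    def add(self, vdev):                  # like `zpool add`: the pool grows right away
        self.vdevs.append(vdev)
    def capacity(self):
        return sum(self.vdevs)            # data is striped across the top-level vdevs

tank = Zpool(single(500))                 # the internal 500 GB drive ("Zelda")
tank.add(single(500))                     # the FireWire drive added as a second vdev
print(tank.capacity() // GB)              # -> 1000: yes, the pool is now ~1 TB...
# ...but losing either disk now loses the whole pool, and the vdev cannot be detached.

mirrored = Zpool(mirror(500, 500))        # mirroring gives redundancy instead of space
print(mirrored.capacity() // GB)          # -> 500
```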
The section about devices "... designated as volatile read cache (ARC)" is confused. I have replaced it with a description of SSDs used as L2ARC devices (readzilla). The volatile ARC does in fact exist, but does not require any special device - it is just (kernel) memory - while the separate log device ("logzilla", used to speed up the synchronous writes needed for some POSIX semantics) requires SSDs with good write performance and endurance. - Henk Langeveld (talk) 00:58, 25 December 2009 (UTC)
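As a rough picture of the hierarchy just described: the ARC is simply a cache in kernel RAM, the L2ARC ("readzilla" SSDs) sits behind it as a larger but slower read cache, and reads fall through to the pool's disks only when both miss. This is a toy Python sketch of that read path, not ZFS code, with all names invented for illustration:

```python
class ReadPath:
    """Toy two-level read cache: ARC in RAM, L2ARC on SSD, then the pool's disks."""

    def __init__(self, disks):
        self.arc = {}       # kernel memory: fastest, smallest, volatile
        self.l2arc = {}     # SSD-backed: larger and slower than RAM, still only a cache
        self.disks = disks  # authoritative copy of the data

    def read(self, block_id):
        if block_id in self.arc:          # ARC hit: served straight from RAM
            return self.arc[block_id]
        data = self.l2arc.get(block_id)   # otherwise try the L2ARC SSDs
        if data is None:
            data = self.disks[block_id]   # both caches missed: read from the pool
        self.arc[block_id] = data         # (re)populate the ARC
        return data

    def evict(self, block_id):
        # Blocks pushed out of RAM can be staged in the L2ARC rather than dropped.
        self.l2arc[block_id] = self.arc.pop(block_id)
```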