Wikipedia:Reference desk/Computing

Welcome to the computing section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


January 20

Laptop

Hello. I recently purchased a Dell D620 laptop. It is running Windows Vista Business 32-Bit. I want to install Ubuntu, preferably the 64-bit version. How do I know if the hardware can support a 64-bit version? --T H F S W (T · C · E) 03:17, 20 January 2011 (UTC)[reply]

Download and run GRC SecurAble. 118.96.159.107 (talk) 05:31, 20 January 2011 (UTC)[reply]
See Dell Latitude#Latitude_D620. It has a 64 bit cpu but apparently some brain damage in the memory architecture. 67.122.209.190 (talk) 06:51, 20 January 2011 (UTC)[reply]
I am suspicious of the article's claim of "brain damage in the memory architecture" given the lack of sources and poor grammar. It looks like it was added by a single disgruntled user whose problem could have had any number of other causes. -- BenRG (talk) 09:03, 20 January 2011 (UTC)[reply]
I took out some of the info at Dell Latitude#Latitude_D620. I found this [1], which describes a Dell D620 running ubuntu (but interestingly enough, with a 3.3GB limitation possibly caused by the so-called "brain damage"). decltype (talk) 10:03, 20 January 2011 (UTC)[reply]

The easiest way to install any version of Linux is with a live DVD, off which Linux will run if you so choose (you will need to change the BIOS settings to boot from a CD/DVD before the hard drive, but in many cases the computer is already set up to do this - probably to make system restore DVDs easier to use). There are separate DVDs for 32-bit and 64-bit. If you try to use 64-bit live media without the corresponding hardware, it will tell you. Best of luck, I quite like Ubuntu. Mattbondy (talk) 03:09, 25 January 2011 (UTC)[reply]
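For anyone checking from a live session: the CPU flags in /proc/cpuinfo will tell you directly whether the processor supports 64-bit mode. A minimal sketch (the "lm" flag stands for long mode, i.e. x86-64 support):

grep -qw lm /proc/cpuinfo && echo "CPU supports 64-bit" || echo "32-bit only"

This works from any booted Linux, including a 32-bit live CD, so you can check before downloading the 64-bit image.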

COMPUTER SCIENCE ENGINEERING

WHAT IS THE IMPORTANCE OF CHEMISTRY IN COMPUTER SCIENCE ENGINEERING? —Preceding unsigned comment added by 117.198.33.180 (talk) 11:51, 20 January 2011 (UTC)[reply]

Chemistry is important in electronics. Electronics is important in computer science engineering. If you have no interest in the hardware, I strongly suggest you switch majors to something like computer information science or computer media science. -- kainaw 14:27, 20 January 2011 (UTC)[reply]
Please also don't type in capital letters. Online, it's considered the equivalent of shouting. Chevymontecarlo 17:38, 20 January 2011 (UTC)[reply]

Internet Explorer RSS feeds

Google Chrome is my main browser but I retain Internet Explorer 8 for RSS feeds as Chrome doesn't support RSS without installation of an application. Recently, when I open IE8, my RSS feeds do not display on screen; there is only a blank white screen. What is wrong with the RSS feeds? Could I export my RSS feeds to Google Chrome? What should I do? --Blue387 (talk) 12:36, 20 January 2011 (UTC)[reply]

Linux calendar sync

I have three google calendars, two exchange calendars, and a droid calendar. Is there a program in Linux that can manage syncing all of the calendars to one another? It appears that exchange is the big hurdle because every solution I've found requires a working install of Outlook - which is not reasonable in Linux. -- kainaw 14:25, 20 January 2011 (UTC)[reply]

I have not tried this, but have you looked into Sunbird? It appears to have plug-ins for all the things you mentioned. How well they work, I couldn't say. APL (talk) 15:26, 20 January 2011 (UTC)[reply]
It appears to me that Sunbird is a simple way of getting Thunderbird+Lightning. I tried Lightning and I couldn't get it to load any events from either Google or Exchange with each of those plugins. The documentation is extremely lacking. For example, it asks for the "location" of the exchange mailbox, but gives absolutely no hint about what the format of the location should be. I tried numerous guesses based on my exchange login information and all I got was numerous empty calendars. -- kainaw 16:12, 20 January 2011 (UTC)[reply]

CPU examples

5 examples of central processing unit —Preceding unsigned comment added by 41.220.69.6 (talk) 17:03, 20 January 2011 (UTC)[reply]

Please see central processing unit. Your question appears to be homework. We will assist with concepts and ideas, but will not answer your homework questions for you. -- kainaw 18:03, 20 January 2011 (UTC)[reply]
Have a look at Intel's processor web page. For a very boring answer to your homework assignment, you can submit the following list of five example Intel i7 steps:
Intel Core i7-860  SLBJJ(B1)
Intel Core i7-860s SLBLG(B1)
Intel Core i7-870  SLBJG(B1)
Intel Core i7-870s SLBQ7(B1)
Intel Core i7-875k SLBS2(B1)
When your teacher complains, just say you were following the machine instructions. Did you know that there are other types of CPU besides the Intel i7 8xx series? Nimur (talk) 20:29, 20 January 2011 (UTC)[reply]

Is a PDF-file in some way more portable than a ZIP-file containing .PNG images?

Is a PDF-file in some way more portable than a ZIP-file containing .PNG images? (When moving content back and forth between various platforms).
--Seren-dipper (talk) 18:44, 20 January 2011 (UTC)[reply]

The images-in-PDF would be easier to view on an iPad/iPhone/iPod-touch than a zip-of-images; I think the same is true for Blackberry, but I don't know about Android in this regard. In general many devices which fancy themselves eBooks will render a PDF (well or badly), but won't know what to do with a general zip file. -- Finlay McWalterTalk 18:58, 20 January 2011 (UTC)[reply]

No, particularly not with the popularization of Comic Book Archive file readers. PDF is an awful format. :p ¦ Reisio (talk) 19:04, 20 January 2011 (UTC)[reply]

Can you say specifically what's awful about it? -- BenRG (talk) 03:26, 21 January 2011 (UTC)[reply]
In lieu of a long rant about the philosophy of data portability, consider reading Technical Information from the Library of Congress's National Digital Information Infrastructure and Preservation Program. If you really want to be truly platform portable, you should provide source-code for your documents, a free and open-source program that can render the document, and instructions for use.
If all you seek is portability across a few major modern operating systems, like Mac OS X, Windows XP/Vista/7, and Ubuntu, then either PDF or archived PNGs will work. All you need to do is evaluate the ease-of-use and convenience of each option. I will also comment that both PNG and ZIP technology and file formats are totally unencumbered by any proprietary IP or copyright claims; this is very good for long-term data storage. Nimur (talk) 20:01, 20 January 2011 (UTC)[reply]
Patents expire in 20 years, so they don't matter as far as long-term data storage is concerned. The PDF format was published as an ISO standard in 2008, and there are free-software implementations of it. Knowing all that, do you still have an objection to the use of PDF as an archival format? -- BenRG (talk) 03:26, 21 January 2011 (UTC)[reply]
There's unfortunately more to it than that. PDF, part of an open specification or not, is controlled by Adobe, and since Adobe's reader has massive penetration and others do not, whatever their reader supports—in the spec or not—is a de facto standard that people will expect to be supported. After that we get into what happens if Adobe ever goes under or decides it doesn't like PDF. Consider what happened when Oracle bought Sun, and how now the OpenOffice, Java™, and MySQL projects are suffering, despite having been open source. ¦ Reisio (talk) 04:16, 22 January 2011 (UTC)[reply]
How exactly are those projects suffering? Oracle is releasing new versions of all of the products you mentioned and they're still very popular. As for PDF, why don't you tell us what you think will happen if Adobe doesn't like PDF? The format has seen very little change since version 1.5 in 2003, yet it's never been more popular. Many open source projects, like OpenOffice.org, have embraced PDF, too.--Best Dog Ever (talk) 05:27, 22 January 2011 (UTC)[reply]
http://www.geekgumbo.com/2010/11/06/the-decline-of-mysql/, Oracle Corporation#Acquisition_of_Sun_Microsystems, LibreOffice#History. ¦ Reisio (talk) 19:28, 22 January 2011 (UTC)[reply]
PDF is a scalable vector format like SVG, not a raster format like PNG, though (like SVG) it supports embedded raster images. It allows selectable, searchable text to be overlaid on a raster-image background; alternatively, the text can be made invisible, allowing scanned book images to be searched and copied as plain text without altering their original appearance. Among the bitmap formats it supports are JPEG2000, which losslessly compresses photos a lot better than PNG, and JBIG2, which losslessly compresses low-color images a lot better than PNG. Zipped PNGs can't do any of these things. If all you need is a bundle of losslessly compressed raster images then a ZIP archive of PNGs is pretty portable, but it makes little sense to compare it to PDF.
Since people are talking about PDF I should mention DjVu, though I don't know which if any ebook readers support it. -- BenRG (talk) 03:26, 21 January 2011 (UTC)[reply]
You just compared ZIP'd PNGs with JPEG2000 inside PDF. My question is this: why? Actually that's rhetorical. JPEG2000 images inside a ZIP, done. Read the original question. ¦ Reisio (talk) 04:16, 22 January 2011 (UTC)[reply]
Ridiculous. The file size of a bunch of PNGs would be incredible and the resources needed to zoom in on high-resolution bitmap images one at a time would make a very frustrating reading experience. (I'm assuming this is an e-book.) I've made many e-books myself from PNGs, so I'm speaking from experience. By the way, PDF supports ZIP compression. You can choose to compress text and vectors using ZIP and images using either ZIP, JPEG, or JPEG2000.
There are also advantages when it comes to searching for text in a PDF as compared to PNGs, which can't be searched at all. You can also add bookmarks and hyperlinks to PDFs.--Best Dog Ever (talk) 05:22, 22 January 2011 (UTC)[reply]
The question was about PDF versus zipped PNGs; that's why I compared those two things specifically. I don't dislike ZIP or PNG, nor do I especially like PDF (DjVu is probably better for scanned documents, which is why I mentioned it). But I get upset when people express strong opinions about which things are good or bad despite apparently having no factual knowledge of the things themselves. There's an emotional sense that certain things are "proprietary" and other things are "open" that doesn't seem to have much connection to reality. PKWARE, the original publishers of the ZIP specification, have continued to publish revised versions adding strong encryption, Unicode filenames, and a variety of new compression methods, among other things. WinZip, which I believe is overwhelmingly the most popular utility for creating and extracting ZIP archives, supports these features, while the support in free implementations is much spottier. WinZip has also added its own extensions that aren't in the PKWARE standard, exploiting its position as market leader. This is what you said you feared would happen to PDF, and it actually has happened to ZIP. Does that make ZIP a bad archival format? -- BenRG (talk) 06:59, 22 January 2011 (UTC)[reply]
First, I'd like to clarify for the record: I don't think I ever said (or even implied) that PDF is a "bad" format. It has definite advantages, particularly with respect to search-able text, high compression ratios for vectorized graphics, etc. And it does have free and open-source implementations; and as BenRG has correctly pointed out, the PDF format is (now) an ISO standard.
I think we also need to be very clear: "PDF" is not identical to "anything that Adobe Acrobat spits out with a .pdf extension" - and "ZIP" is not identical to "anything that WinZip spits out with a .zip extension." In general, there are standard data formats and there are (free and non-free) implementations of programs that work with those data formats. I strongly recommend using standard data formats - without extensions (whether free or proprietary). If a data format extension (free or non-free) is useful, it will eventually be standardized and alternative programs will be developed that can handle them. The crux of data portability is, by definition, not having to rely on a single platform implementation.
But I will also point out that the more esoteric your platform, the more you need to worry about "portability." While it is definitely true that PDF has open-source implementations, I doubt they would be easy to port to a non-(Windows/Mac/Linux) computer. On the other hand, ZIP, PNG, JPEG, and many other program implementations are trivially ported to any platform (like a mobile or embedded device). "Portability" has a mystique associated with it; it is presented as a binary "yes/no" of whether a particular thing can be ported to a particular platform. In reality, portability is a measure of effort required to make something work on a new system. Standard formats are easier to "port" because somebody else will have done the work for you. Simple formats are easier to "port" if you have to do the work yourself. As a programmer, I would rather have to implement a ZIP/PNG decoder on a new hardware and software platform, rather than a PDF reader - even though source code is available for both formats. As a user, I would rather have my data in PDF format, because it has the numerous advantages listed above, and because it is so popular, I can generally trust that some other programmer has already done the work to port it to my new platforms. Nimur (talk) 18:33, 22 January 2011 (UTC)[reply]
I think the 'bad format' issue was in reply to Reisio, who did seem to say it was, although I understand the need to be clear that you didn't say it was a bad format. Nil Einne (talk) 11:15, 23 January 2011 (UTC)[reply]
I have encountered a huge number of websites which distribute documents in PDF format and exactly none which use zipped PNG files. Whilst most OSes are capable of opening a zip file and displaying PNGs, as Best Dog Ever says, reading them could easily become a very frustrating experience. Furthermore, there are a number of programs which will produce PDF output for free, so the previously high cost of authoring PDF files (when it was in Adobe's hands only) is no longer an issue. It is also worth considering that PDF files often have controls to protect the content from modification/distribution if that is important to you - zipped PNGs have no such controls. That said, if you are simply distributing a web comic, via a torrent for example, then I can imagine a Comic Book Archive file or zipped PNGs are as good as any other and may provide the hosting site with an easy way to provide visitors with preview pages. Astronaut (talk) 14:36, 22 January 2011 (UTC)[reply]

Thank you, all, for your input! :-)
--Seren-dipper (talk) 07:09, 25 January 2011 (UTC)[reply]

Dual-boot system

I have some questions about dual booting. I can figure out how to partition the hard drive, but I can't get the second operating system to boot on the second partition (it wipes out the first). What software could I use that could help with dual-booting? --T H F S W (T · C · E) 21:20, 20 January 2011 (UTC)[reply]

GRUB: "(GNU GRand Unified Bootloader) is a boot loader package from the GNU Project." Note that some operating systems do not like to be installed on secondary partitions. In my experience, it is easier to install Windows first (on the first primary partition) and to install Linux afterward. Allow Linux to install GRUB and overwrite the Windows-installed bootloader. GRUB will be the first thing you see when you boot up, and will let you choose which operating system to launch. If this doesn't work for you, you'll have to elaborate on your setup - you haven't even specified which operating system(s) you're installing. Nimur (talk) 21:53, 20 January 2011 (UTC)[reply]

You can get real-time help at irc://irc.freenode.net/linux ¦ Reisio (talk) 04:17, 22 January 2011 (UTC)[reply]

I partition using the disk manager in Windows because I do not expect Windows to 'play nice'. I figure if I use Windows tools to make changes to Windows I will keep it 'happy'. Once the open partition is there I put Linux and a Linux bootloader in the open partition. I prefer to leave the Windows bootloader alone (I had some bad experiences before GRUB2 came out). By default Linux will replace the Windows bootloader; usually this is changed just before the final step in Linux installation (to the hard drive) through an 'advanced options'-like dialog box. There is a piece of software for Windows that is freely available called EasyBCD. It can edit the Windows bootloader to add entries for Linux (and a few other options - which might be important since you never specified what OSs you are working with). I think this approach might be best described as creating a chain of boot loaders, since the Windows bootloader does not directly load Linux; it loads the bootloader that Linux installed to its own partition and uses this bootloader (usually GRUB) to load Linux. I have had no problems with just using GRUB2 to load Windows and Linux, but the one time I tried this setup it was not intentional. Neosmart Technologies, in addition to offering EasyBCD, offers Windows Vista & 7 bootloader restoration disks which can be a lifesaver. Good luck. Mattbondy (talk) 03:20, 25 January 2011 (UTC)[reply]
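To make the chainloading idea concrete, here is a minimal sketch of a custom GRUB 2 entry of the kind that goes in /etc/grub.d/40_custom on Debian/Ubuntu-style systems (the partition numbers are hypothetical; point them at wherever Windows actually lives, then run sudo update-grub to regenerate the menu):

#!/bin/sh
exec tail -n +3 $0
# Everything below this line is copied verbatim into the generated grub.cfg.
menuentry "Windows" {
    insmod ntfs
    set root='(hd0,1)'
    chainloader +1
}

The chainloader +1 line is what hands control to the bootloader in that partition's boot sector - exactly the "chain of boot loaders" described above.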


January 21

Web search engines that allow wildcards?

I'm looking for a phrase but I don't know part of a word. Are there still any web search engines where searching for e.g. "Eco* establish*" will give me hits for "Economics establishes" and "Ecological establishments"? -- Jeandré, 2011-01-21t11:43z

I'm not aware of any, but an alternative solution would be to do the stemming yourself and search for the results. There would only be around 180 things to search for in your example. --Sean 16:02, 21 January 2011 (UTC)[reply]
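If you want to script that expansion on a Unix-like system, the system word list is a handy source of completions. A rough sketch (it assumes /usr/share/dict/words is installed, and it will happily overgenerate phrases no one has ever written):

for a in $(grep -i '^eco' /usr/share/dict/words); do
    for b in $(grep -i '^establish' /usr/share/dict/words); do
        echo "\"$a $b\""
    done
done

Each output line is a quoted phrase ready to paste into an ordinary phrase search.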
You might have some success with onelook, though it returns no hits for your example. It searches for phrases (in obscure dictionaries and encyclopedias and some other places that aren't specified) and allows wildcards. 81.131.65.219 (talk) 16:49, 21 January 2011 (UTC)[reply]
I suspect all the very big (whole web) search engines index on hash codes for performance reasons, so can't do prefix searches. Sean's suggestion sounds straightforward if a bit tedious, if you have just one or two queries like that. If you use a machine client you will probably get blocked by the search engine. 67.122.209.190 (talk) 09:01, 22 January 2011 (UTC)[reply]
Two or three years ago there was a very good website that told you all about the many different search engines and what they could do for the searcher, including things like wildcards. Unfortunately I cannot remember its name - does anyone remember it? (It was not about "SEM", but about searching). Now everything except the top three search engines seems to have been forgotten by people, although some of them such as ask.com are still out there. (Would be interesting to see a list of all working search engines). All I could find was this http://www.searchengineshowdown.com/ which is not the site I remember. 92.24.178.157 (talk) 23:17, 22 January 2011 (UTC)[reply]
This site has a section on wildcards: http://searchenginewatch.com/2155981. I tried your search in Google - unfortunately, the first * also stands in for words that come between eco* and establish*, so the results may not be what you want. Using the wildcard at the end of the final word seems to work.

Interference with Browsing

Every time I open a page in my IE8 browser I get Security questions: "A website wants to open web content" that are obviously wrong. The names given rotate among DivX Plus Web Player / McAfee Anti-spam / Microsoft Search Enhancement Pack / Java(TM) SE Runtime Environment 6. U... / Adobe Flash Player. The warnings just keep coming regardless of whether I click on Allow or Don't Allow. They make it hard to read Wikipedia! (Vista Home Premium) Cuddlyable3 (talk) 13:18, 21 January 2011 (UTC)[reply]

What Anti-Virus software do you have? Have you tried using Firefox? Firefox is a better browser and should solve your problems. Tofutwitch11 (TALK) 14:01, 21 January 2011 (UTC)[reply]
While I agree that Firefox is better than IE, better is somewhat subjective. And installing Firefox will not solve the problem the OP is having with IE. The OP may be on a restricted computer and cannot use Firefox, or may simply not want to use Firefox. 82.44.55.25 (talk) 22:37, 21 January 2011 (UTC)[reply]
OP here. My anti-virus is AVG free 9.0. Until this problem arose I have been satisfied with IE browser. Cuddlyable3 (talk) 11:38, 22 January 2011 (UTC)[reply]
Has this only recently started happening (you've been round here a long time and I haven't noticed you mentioning this before)? I have a very similar setup to you, with Vista Home Premium/IE8/AVG free 9.0, but don't have DivX Plus Web Player, McAfee Anti-spam or Microsoft Search Enhancement Pack; and I don't have this problem. Perhaps take a look at recent installations/updates or consider using system restore to take your PC back to a time before this was happening. Astronaut (talk) 14:19, 22 January 2011 (UTC)[reply]
The problem started yesterday. The only cause I can see is that I downloaded an update to DivX Player. It has added a button below YouTube videos but the warnings come before all sites (including Wikipedia). Cuddlyable3 (talk) 18:42, 22 January 2011 (UTC)[reply]
Do you actually need/want the DivX IE plugin? If not perhaps just try disabling it from within IE and see if it helps. P.S. While I use Firefox myself I agree with 82 that it's poor advice here. It sounds like the problem is probably with the DivX IE plugin and may not be the fault of IE. Using Firefox may resolve the issue. It also may not if you install the same plugin for Firefox and it has the same bug. Nil Einne (talk) 11:11, 23 January 2011 (UTC)[reply]
Resolved
 – Thank you Nil Einne. Cuddlyable3 (talk) 18:58, 24 January 2011 (UTC)[reply]

ncis

Can you please tell me what make and model of mobile phone Sasha Alexander (i.e. Kate) is using in the NCIS show? It appears to be a black oblong flip phone or PDA with the screen on the top half and the keypad on the lower half. I have tried to look for this on the net but can't seem to find a model that looks like the one Kate is using. Is there a web site that shows the gadgets used on the show?

regards r a carrington.  —Preceding unsigned comment added by 80.2.45.163 (talk) 15:08, 21 January 2011 (UTC)[reply] 
That description covers a multitude of phone models from a multitude of manufacturers — you'll likely have to have some high res shots or insider info to find out. If all you want is a phone _like_ that, just walk into a mobile phone shop, there will be plenty. ¦ Reisio (talk) 04:22, 22 January 2011 (UTC)[reply]
R.A. is apparently watching the show in reruns: the character of Kate was only in it for the first two seasons, in 2003–05. So this must be a phone that was available then, and given the rate of change in cellphone technology and fashions, it quite likely would not be available now. At least not new. --Anonymous, 05:30 UTC, January 22, 2011.
A couple of people seem to think it is a Palm device; one states it is a Treo smartphone - try looking at them from here: Palm_(PDA)#List_of_PDA_models Chaosdruid (talk) 14:26, 23 January 2011 (UTC)[reply]

Vista - Permission to Put things in Folder I Just Made

Resolved

I have a bizarre problem on Vista. I have just made a folder on my desktop. Then I made a couple of other folders and put a few bits and pieces in them. Then I tried to put these other folders inside the first one, and I am told I need to grant permission (which is inevitably denied). I have succeeded in putting one of the folders into the first folder (granting permission which was accepted) but the other one won't be allowed in. The main folder was just made in the usual way (right click on desktop>new folder) and I haven't added any special properties to it. Can anyone help me unlock it? --KägeTorä - (影虎) (TALK) 15:31, 21 January 2011 (UTC)[reply]

No worries - everything had been set to 'Read Only' for some reason (I have an idea of why). Unchecking that fixed the problem. --KägeTorä - (影虎) (TALK) 15:52, 21 January 2011 (UTC)[reply]
Purists might say that the desktop is not the place to put folders (but just shortcuts to folders). Like you, I occasionally break this rule, but there are a couple of minor disadvantages in some situations. Dbfirs 18:03, 21 January 2011 (UTC)[reply]
Aye, well, I prefer to keep my desktop clear, but I just wanted to make a folder quickly and had nowhere else more appropriate to do it other than on my desktop, because my desktop was already 'open', as it were. Cheers, though. --KägeTorä - (影虎) (TALK) 18:37, 21 January 2011 (UTC)[reply]

Size of Wikipedia

Do the following two scenarios occupy the same amount of Wikipedia server resources?

  • A Wikipedia page has been modified 4,999 times since its creation, therefore has had 5,000 versions including the current one, and the average size of its versions is a given number of bytes.
  • 5,000 Wikipedia pages have not been modified since they were created, and their average size is the same number of bytes.

To put it another way: if one wished to calculate the total size of Wikipedia, would one get the actual answer by visiting every single history page and adding up the indicated numbers of bytes for each past and current version of each page? And could there be some estimation - even if it's a very rough one - of the (English) Wikipedia's total size? --Theurgist (talk) 16:00, 21 January 2011 (UTC)[reply]

If I understand your question correctly you're asking whether the MediaWiki software used by Wikipedia stores whole articles for each revision, or some kind of diff that would save space. The answer appears to be the former. Here is a database schema for MediaWiki. In the lower right is a table called TEXT which appears to hold complete articles for each revision. So the final answer is "yes" -- they would occupy roughly the same space (excepting the bits of metadata about revision times and so on).
As for the total size, it looks like it's around 5 TB for all revisions of all articles. Note that that doesn't include any images, which presumably take up a good chunk as well. --Sean 16:41, 21 January 2011 (UTC)[reply]
Previous versions of mediawiki kept a current version and backwards deltas (diffs). This was much more compact, but it meant that if you wanted to retrieve a version 10 revisions ago, you needed to retrieve and apply 9 sets of diffs. The current schema stores the full text of each revision in the old_text column of the text table. This reflects the realities of disk pricing vs cpu pricing. -- Finlay McWalterTalk 16:57, 21 January 2011 (UTC)[reply]
But see $wgCompressRevisions, though it sounds like it doesn't compress revisions against each other. Paul (Stansifer) 17:19, 21 January 2011 (UTC)[reply]
You wouldn't need to apply N diffs to get the Nth previous version. You could store every tenth revision in full and the rest as diffs from the nearest full revision. That would save an enormous amount of space and never require applying more than one diff. If you're going to use zlib to compress revisions, you could compress every tenth one with an empty initial dictionary and the rest with the nearest full revision (or the first 32K of it) as the initial dictionary. -- BenRG (talk) 21:53, 22 January 2011 (UTC)[reply]
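As a toy illustration of that every-tenth-revision scheme using the standard diff tool (the file names are hypothetical; rev$n.txt is assumed to hold the text of revision n):

n=$1                              # revision number to store, e.g. 17
base=$(( n / 10 * 10 ))           # most recent revision kept in full, e.g. 10
if [ $(( n % 10 )) -eq 0 ]; then
    cp "rev$n.txt" "full$n.txt"                        # every tenth: store whole
else
    diff "full$base.txt" "rev$n.txt" > "rev$n.diff"    # otherwise: one diff suffices
fi

Reconstruction is then a single patch applied to the nearest full copy, never a chain of nine diffs.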
To answer the last portion of the OP's question: a snapshot of the current state of the English-language Wikipedia database, including all revisions of all pages, is estimated to be about 5 terabytes (see Wikipedia:Database download). To strip out all previous revisions of all pages would require a lot of database transactions (CPU time); such a service is not available from the Wikimedia Foundation's servers (so you'd have to download the full database and perform that data culling yourself). Periodically, the latest Pages/Articles are provided through this link in XML format. Those are a collection of BZ2 archives of XML, and appear to be about 10 or 20 gigabytes (about three to five DVDs' worth of data). The Special:Statistics page gives approximate counts of the latest number of articles. Nimur (talk) 19:07, 21 January 2011 (UTC)[reply]
At the download link I gave they list pages-articles.xml.bz2, which is current pages only. About 6 GB uncompressed. --Sean 20:06, 21 January 2011 (UTC)[reply]
As a side note, version-control systems have exactly this time/space tradeoff, too, and they have quite sophisticated solutions. Here's a high-level discussion. Paul (Stansifer) 05:51, 22 January 2011 (UTC)[reply]

Thanks everyone for the replies. But I now have an additional question: do edit summaries also contribute to the size? I think they must take up the same amount of memory as ordinary readable text, but I can't be sure. --Theurgist (talk) 18:30, 22 January 2011 (UTC)[reply]

Of course -- they must be stored, after all -- but they're insignificant compared to the size of almost any article. --Sean 14:26, 24 January 2011 (UTC)[reply]

Command prompt

I accidentally opened a .bat script in Windows 7 which had a line instructing it to start a new instance of itself and loop, which created hundreds of command prompt windows within seconds all making new windows themselves. I couldn't stop it from task manager or process explorer with the "kill tree" command and had to shut the computer down and restart. In this situation, in case it happens again, is there any way to stop this once it starts without restarting windows? 82.44.55.25 (talk) 22:25, 21 January 2011 (UTC)[reply]

You have found yourself in a race condition with your computer; it's unlikely you'll win. What you would need to do is use Task Manager to kill the newest instance of the program before it replicates. It's possible, but unlikely, that you could do that. In some versions of Windows, tskill can be used to kill all instances of a process with a certain executable-name, so you could use the command line to execute that; but you'd still have a race condition between the two scripts. Nimur (talk) 23:03, 21 January 2011 (UTC)[reply]
Depending on the script and your skills, it's best to simply reboot. Alternatively, there are programs that kill all user processes, but unless you know what you're doing, you're not going to win against the machine. =P Smallman12q (talk) 00:17, 22 January 2011 (UTC)[reply]
Either log off or better yet, if you have an unsaved document that will prevent logging off, simply try to log off, close all the batch file's windows (all of them), then cancel the log off attempt. It is possible to close all the windows when Windows is in the process of logging off or shutting down because when Windows is logging off or shutting down, no new process can be started (remember "... the window station is shutting down" error message?) 118.96.157.166 (talk) 00:37, 22 January 2011 (UTC)[reply]
You could also press the "Pause/Break" key on your keyboard. 216.120.192.143 (talk) 15:31, 24 January 2011 (UTC)[reply]
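For completeness, Windows also has a kill-by-image-name command that ends every matching process in one call, which gives better odds than clicking processes one at a time (note that it will also close the console window you type it into, if that console is cmd.exe):

taskkill /F /IM cmd.exe /T

You would still be racing the spawner as described above, but matching all cmd.exe instances at once is about the best a single command can do.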

Personal information misuse

It seems that someone has misused my email address in a craigslist advertisement. AOL thinks they have resolved that issue, but now I can't connect to your system. For example, I have had calls telling me they are interested in a car I have for sale; I have never had a car for sale. Do you have a phone number I can call and have you give me some assistance? (removed personal information) Thank you. Roland A. Mireles —Preceding unsigned comment added by 76.218.123.7 (talk) 23:43, 21 January 2011 (UTC)[reply]

I've removed the personal information in the original post. Posting your email and phone number on a hugely public site like Wikipedia isn't going to help your situation at all. If you can find the Craigslist ad, you can flag it, and there is a Help link on the main page (at craigslist.org) that offers a "contact us" link, so you can write to them and request help. In the meantime, you probably should let your phone calls go to voicemail or an answering machine until the ad has run its course. --LarryMac | Talk 23:58, 21 January 2011 (UTC)[reply]


January 22

Ubuntu

How do I look at system information (CPU power, RAM, etc.) on Ubuntu? --T H F S W (T · C · E) 03:06, 22 January 2011 (UTC)[reply]

gnome-system-monitor, lshw-gtk, top, htop, cat /proc/cpuinfo, free, etc. ¦ Reisio (talk) 04:25, 22 January 2011 (UTC)[reply]
I'm afraid I don't quite understand what you mean. Are these online or in my desktop? One other thing, anyone know of a DVD driver for Ubuntu 10.10 ('Maverick Meerkat')? --T H F S W (T · C · E) 05:20, 22 January 2011 (UTC)[reply]
I'm guessing you type the command into a terminal window Nil Einne (talk) 05:48, 22 January 2011 (UTC)[reply]
That's right, you type that command into the command line terminal. Unfortunately, with most Linux distributions, you are still expected to do way too many simple things using the command line terminal, which most computer users nowadays have no knowledge of or patience for whatsoever. Roberto75780 (talk) 07:53, 22 January 2011 (UTC)[reply]
Wrong. It is merely simpler to tell someone what command to copy, paste, and run than telling them how to click around in menus for days as Windows users must. :p ¦ Reisio (talk) 19:30, 22 January 2011 (UTC)[reply]
I agree entirely. However, your answer wasn't "novice-friendly" because it did not specify that the user should use the terminal. The inclusion of three extra words ("Use the terminal:") would have made your otherwise flawless answer much more newbie-friendly. Keep in mind that not all users have extensive (or any) *nix experience. Nimur (talk) 19:41, 22 January 2011 (UTC)[reply]
I'm okay with people having to ask follow-up questions as he did. Many reasons to not go overboard on a first response. ¦ Reisio (talk) 20:07, 22 January 2011 (UTC)[reply]
That is also some of the beauty of the GNU/Linux world, IMHO. --Andreas Rejbrand (talk) 11:55, 22 January 2011 (UTC)[reply]
I must say, over the three years of using Linux right now, I have gotten to like and use the command line interface for things which I can also do from menus - playing music/videos, simple image conversion, software updates and so forth. IMHO it's far less hassle in a great majority of cases. --Ouro (blah blah) 21:02, 22 January 2011 (UTC)[reply]
Why not just use the 'System Monitor' application? On the top bar click System to get the drop-down menu, then 'Administration', then 'System Monitor'. --Aspro (talk) 13:47, 22 January 2011 (UTC)[reply]
About your other question (about a DVD driver) - Ubuntu already ships with drivers to read and write CDs and DVDs. Beyond that, what do you want to do? VLC will play a DVD (as will Totem ("Movie Player"), although it will ask to install a special package to allow it to decrypt commercial DVDs). K3B and Brasero will burn a DVD. Bombono will let you author video DVDs. K9 copy will let you copy video DVDs and Thoggen will recode them for another destination like an iPod. All of these are either installed by default or can trivially be installed using Ubuntu Software Center or Synaptic. -- Finlay McWalterTalk 16:18, 22 January 2011 (UTC)[reply]
Easier method - install the package hardinfo. You can do this by typing sudo apt-get install hardinfo into a terminal, or by going to your system menu -> administration -> Update manager and doing a quick search for hardinfo. After you install this package, it will show up on your applications menu -> system tools -> System Profiler and Benchmark. Also, I further recommend you ask questions at http://www.ubuntuforums.org; they are much more knowledgeable about things Ubuntu. Magog the Ogre (talk) 19:41, 22 January 2011 (UTC)[reply]
I think you mean "package manager" not "update manager" - the latter only deals with updating existing packages with new ones for security/bug fixes. CS Miller (talk) 20:28, 22 January 2011 (UTC)[reply]
OK, I got VLC. It's not working though. I put the DVD in and hit play, and it just gives the filename of the DVD for a second and then goes blank. --T H F S W (T · C · E) 23:42, 22 January 2011 (UTC)[reply]
hardinfo is really cool. Where has it been my whole life? Paul (Stansifer) 15:54, 24 January 2011 (UTC)[reply]

wget -- downloading to a directory and renaming a file

I can't quite figure out how to do something with wget in OS X. I would like to be able to download the index file of a site to a specific directory and rename it with the current date. For example:

wget http://en.wikipedia.org gives me the index.html file in my home directory, but I'd like this instead: /wiki-archive/2011-01-22-wikipedia-index.html.

Any help would be much appreciated. --CGPGrey (talk) 12:28, 22 January 2011 (UTC)[reply]

"-P" or "--directory-prefix" will allow you to set the directory, and "-O" or "--output-document" allows you to set the filename. There is more info in the wget manual. To set the filename to the current date you'll have to work out how to print the date in your OS. I have never used a Mac so I don't know how to do that, but if it helps, a solution to do it in .bat on Windows would be "%date:~-4,4%-%date:~-7,2%-%date:~0,2%-%time:~-11,2%%time:~-8,2%.html" 82.44.55.25 (talk) 12:40, 22 January 2011 (UTC)[reply]
Thanks for pointing me in the right direction. I've found out that the answer is:
wget -O "/path/to/file/`date +"%Y-%m-%d --"` archive.html" http://en.wikipedia.org --CGPGrey (talk) 17:49, 22 January 2011 (UTC)[reply]
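For later readers, a slightly tidier equivalent using $(...) command substitution and date's %F shortcut (%F expands to %Y-%m-%d; the /wiki-archive directory from the question must already exist):

wget -O "/wiki-archive/$(date +%F)-wikipedia-index.html" http://en.wikipedia.org/

Quoting the whole -O argument keeps the shell from splitting the path if the date format is ever changed to include spaces.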

New components for an older PC

Hello. I'm about to buy some new components to revive a ~6 years old business PC for multimedia and gaming use. However, there are a few things that are confusing me.

First, the PSU issue. I have no idea what kind of PSU is installed. The label on the side does say it's "ATE-OK". Is ATE the same as ATX? In other words, can I be as sure as usual that an ATX PSU will work? Also, most of the PSUs I find on the Internet only mention the number of SATA connectors. Since some of the hardware is rather old, I'm not sure whether the new PSU will have the required connectors for it as well. I'm no expert on connectors, and I'm not going to query about the amounts of every different type (especially 'cause I don't know them all), but is there some specific connector that's fallen out of use and may be unavailable on newer PSUs?

And then, the graphics card. I'm thinking about Nvidia Geforce GTS 450, but on the technical specs on the site it says that it needs "two free card slots". I reckon it means that it needs space for air circulation, am I right about this?

Thank you all! 88.112.51.212 (talk) 13:14, 22 January 2011 (UTC)[reply]

I would be careful here. I believe one of the major drivers of motherboard changes has been the ever increasing demands of graphics cards. In my experience, the motherboard in an old PC places severe limitations on which new graphics cards are still compatible with it. And unless you intend to replace the CPU and clock, I would imagine the gaming performance would suck, even if you manage to get a graphics card that will work with the motherboard. Astronaut (talk) 13:56, 22 January 2011 (UTC)[reply]
Actually the CPU is fine, surprisingly. I guess the computer was built for computing performance with average graphics back then. My mobo has the required PCI-E slot, so it should fit in, but if I've understood correctly there are different types of PCI-E slots as well, right? 88.112.51.212 (talk) 14:25, 22 January 2011 (UTC)[reply]
(EC) Not really true on the motherboard point. If the motherboard has a PCI Express x16 slot, then most graphics cards should work (in theory all should, but I would never rule out compatibility problems). It's likely to only be 1.0, but unless it's an extremely high end card (and even then I'm not sure whether they've really reached a level where it's starting to become a bottleneck) it's not likely to make much of a difference. See for example this test showing a GTX480 running with 4 and 8 PCI Express 2.0 lanes (8 lanes is equivalent to 16 lanes for PCI Express 1.0) [2]. I would however be more concerned about the PSU, and also concur that the OP may want to consider whether the other new components will be enough to ensure the graphics card isn't overpowered compared to the rest of their system. "ATE OK" is just some sort of testing label. If that's the only label on your PSU (i.e. no power specs, not even a brand name), I wouldn't trust it with your current system, let alone any upgrade. Oh, and I forgot to mention: it's correct that if the card says it needs two slots, that means it needs an additional slot for the HSF. This slot will be to the right of the card if the part of the card that plugs into the slot is on the ground and the monitor connections on the card are facing towards you (look at an image of the card to get an idea). It obviously doesn't matter what sort of connection, if any, is present in that slot on your motherboard, provided you don't need to use it. You may also want to check clearances at the end opposite the monitor connectors if it's a long card. Nil Einne (talk) 14:30, 22 January 2011 (UTC)[reply]
Thanks. The card isn't exactly high-end, and it won't be too overpowered, as my current Intel integrated graphics card is the limiting factor of my gaming performance ATM (I've already upgraded the RAM; the processor is adequate and performance is good on non-graphics benchmarks). The PSU does have some labels, mostly in Chinese though. I couldn't find anything of interest apart from the manufacturer (LITEON) and some power statistics (300W total output peak, it says). So, I was right about the cooling system. My current card layout is as follows: network card on top (PCI I think), below it two empty slots that look similar to the PCI-E although very short, and then the PCI-E. I'm not planning to use these shorty slots so the card'll probably fit right in. Thanks for your help, and if you have further pointers, I'll be listening! 88.112.51.212 (talk) 16:27, 22 January 2011 (UTC)[reply]
I agree the graphics card isn't high-end and the Intel integrated card is a very major bottleneck for gaming; on the other hand the GTS 450 is, I think, what I'd call high mid-range, so I don't know if I'd use it e.g. on a weak dual core with 2GB of RAM. (Not to say I wouldn't upgrade, but I might not go that high.) However, I appreciate this is a complex equation, since it depends e.g. on whether you're likely to keep the card through any upgrades and also on pricing, since even if the 450 is bottlenecked in some games by the rest of the system, it may still be worth it price/performance wise. However this isn't something I've looked into for a while and I don't game much nowadays, so this is just a general explanation of what I was thinking; you appear to have looked into it and I'm not trying to change your mind.
The short PCI-E slots are probably x1 (1 lane) PCIe slots. LiteOn is a known brand, so the PSU isn't as terrible as I thought. Also, if you live and bought your PSU in Finland, I've heard before e.g. [3] that the Finnish regulators are fairly proactive in keeping the worst of the worst PSUs out of the market. (This compares to NZ where, while we have decent consumer protection laws, most computer stuff is still not really something the regulators ever do much about.) Still, 300W is likely pushing it; it may be doable depending on your system with an excellent PSU, but this might not be yours (particularly since it's 300W peak). From a quick search I think GTS 450 cards need a 6 pin PCI Express power connector. Some cards come with adapters for molex; these would be okay with some decently specced but old PSUs, but I wouldn't recommend them for yours.
So yeah, I'd recommend a new PSU. At a random guess I'd say anything 380W and over should be more than enough; I'd concentrate more on getting a decent PSU than on getting more watts. (I've spoken on this at length on the RD before: don't trust any claims that you need outrageously high power for a system. Most of these are based on power usage figures that just add the maximum possible for every component despite the fact this never happens, and also on the assumption that you never want to get close to the maximum for a PSU, since with crap ones they tend to die. [4] is a good place to get an idea of real world PSU requirements.) It's difficult to recommend brands since these depend on where you live, but something from Enermax, Corsair, OCZ, Seasonic, Antec and a few brands I probably can't remember off the top of my head are usually good bets. I'd also consider something that's 80 PLUS certified, since even if you don't care about the energy savings it'll ensure some minimum quality (particularly nowadays when so many have it).
There haven't been any major changes in connectors. Your motherboard may only have 20 pin ATX, but you'll normally have no problems plugging a 24 pin ATX connector into the 20 pin motherboard slot (it's keyed so you can't do it wrong), and in any case many PSUs still come either with an adapter to convert the 24 pin to 20 pin, or the 4 extra pins can be disconnected from the 20 pin to give a 20 pin ATX connector. Also, many PSUs will now have an 8 pin +12V motherboard connector; your motherboard will probably only have 4 pin if any, but again this shouldn't be an issue. A modern PSU will also likely have more SATA and fewer molex connectors, but unless you have a ton of drives or whatever with molex power connections this shouldn't be an issue (and there's always Y cables). There will probably be few (1 if any) FDD connectors, but you can always use an adapter and I'm not sure you'll be using one anyway. Modern PSUs don't have -5V, but this has been the case for a long time, so I doubt your motherboard uses -5V. (On very old systems some particularly high-specced modern PSUs sometimes have problems starting because of insufficient loading, particularly on 12V, which wasn't used so much in the past, but that applies to systems much older than yours.)
As I hinted above, PCI Express is supposed to be backwards and forwards compatible, so you should have no problem with it working in theory. I have heard of compatibility problems before, although I think this was with forward compatibility (using a PCI Express 1.0 card in a PCI Express 2.0 slot) and was a while ago. I'd never rule them out, since poorly designed systems or sometimes even flawed specs can allow problems. (I know of a case where a Maxis PATA HDD didn't work with a certain motherboard; not sure why.) Despite what you may read in some confused forums and articles, PCI Express 2.0 slots provide the same maximum power PCI Express 1.0 slots could, 75W, so it definitely shouldn't happen. I have heard of motherboards that couldn't provide enough power for certain AMD CPUs even though they were only using what had already been in the spec, but the PCI Express thing is even more strange since there have been cards needing extra connectors since the beginning, which I would have thought drew close to 75W from the slot. Without knowing the details, one possibility is that it's the PSU's fault (which also should never happen; even if you draw too much, according to the ATX standards it should just shut down). I believe Finland has good consumer protection laws, so if you do live and bought the components there, I'd expect you'd be entitled to have any damaged components replaced if it does happen, whatever is at fault and whenever you bought them, since it would likely be a design defect (unrelated to wear and tear or whatever). However that could be more trouble than it's worth. (I'm pretty sure both are the case in NZ.)
Nil Einne (talk) 13:51, 23 January 2011 (UTC)[reply]
http://freewarehome.com - click on the sub-menus for systemutilities/systemtools/systeminformation. System Information For Windows and some other software on that site should tell you more about your computer. I would be very careful - I installed a graphics card that the internet told me was compatible, and it burnt out my motherboard and the card. 92.24.182.196 (talk) 21:00, 22 January 2011 (UTC)[reply]
Haven't got much to lose. The current hardware cost me less than 300€ and the new components I'm planning will cost even less. As for the system information software, thanks, but I've already got one for Linux. It's probably not the best available, but will do. 88.112.51.212 (talk) 11:21, 23 January 2011 (UTC)[reply]
I think 92 is suggesting he/she can provide better advice if you provide more info on your system, whatever tool you use. Nil Einne (talk) 13:55, 23 January 2011 (UTC)[reply]
Well, just for closure, I got the components and they work perfectly. Now, if only I could get Civ V to work on the good ol' Nuxy... thanks everyone! 212.68.15.66 (talk) 05:59, 24 January 2011 (UTC)[reply]

Caps lock

Almost every day, either here on the ref desks or on the help desk, there is a question posted in all caps. The questioner is usually new to Wikipedia, or has a few other edits (where interestingly they didn't use all caps). It seems an odd choice to me, so why choose to write in all caps? Certainly, when told that it can be interpreted as shouting, there is no follow-up from the OP saying "sorry, I thought that was how questions are supposed to be asked", or "sorry, I got gum stuck on my keyboard", or some such. Astronaut (talk) 17:32, 22 January 2011 (UTC)[reply]

Well, I don't know exactly why individuals do this, but I will note that with typewriters, it was not uncommon to type in all caps, and was virtually unheard of to type in no caps. That's switched a bit with computers. --Mr.98 (talk) 17:51, 22 January 2011 (UTC)[reply]
Previous answers to this question (see the archives) have suggested it comes from the 8-bit era, when home computers typed capital letters by default and would fail to understand commands issued in lower case. (If I remember rightly, that's my previous answer to this question, but somebody else said something similar.) Capitals are also recommended for clarity on official forms: so one reason may be a general nervousness about being misunderstood or ignored. I also postulate that it's a cultural thing, and that there are social networks and forums somewhere out there where everybody types in CAPS all the time. 213.122.5.253 (talk) 18:05, 22 January 2011 (UTC)[reply]
8 bit era? Thay spel asif thay is still in seventh grade! I bet their moms' can't even remember the Monkeys.--Aspro (talk) 18:30, 22 January 2011 (UTC)[reply]
when i first started using computers often at home, i used to do that. there's two main reasons actually. first off, to many people, it looks neater. maybe not when contrasted with normal text, but when you use it exclusively, it can look neater. the other reason is out of laziness, so you dont have to use the shift key to capitalize, and still not have capitalization "mistakes" such as "i" and "england" etc. and then i realized that the missing capitalization mistakes do not annoy people nearly as much as the excessive emphasis/screaming perception when it is contrasted with other entries that are not all in caps. Roberto75780 (talk) 18:52, 22 January 2011 (UTC)[reply]
The equation of capital letters to shouting is not a universally accepted tenet, either. Sometimes, caps-letters do convey extra emphasis; sometimes this formatting conveys no information at all. Sometimes, it conveys metadata. Most often, I think, ALLCAPS is an indication that a human or a machine is designating some text that should be interpreted by both humans and machines. (Machine-consumption-only data is transferred in binary; human-consumption-only data is transferred in nicely formatted sentences). Here are some examples: print editions of many newspapers still use ALLCAPS for the first line (or for the date and location line) of news stories. Print and electronic versions of ALMARS messages are always printed in ALLCAPS, including things that are not being shouted (such as this UNITED STATES MARINE CORPS BIRTHDAY MESSAGE). (INSERT JOKE ABOUT HOW MARINES YELL EVERYTHING). Many NOAA data products and other messages are decoded to ALLCAPS text products like this CHICAGO AREA HRR. Nobody presumes that the HOURLY WEATHER ROUNDUP REPORT is being "yelled."
Using fewer characters eliminates a degree of freedom in text equivalence matching. This simplifies programs that must parse these data formats; so at a cost of "mild" inconvenience to human-readability, it greatly simplifies machine-readability. Even in this era of smart programmatic text-parsers that can ignore letter-case in multiple languages, there is still a great deal of additional complexity when crossing human-language boundaries (not to mention character encodings). Restricting input formats to only 26 ASCII characters, plus a few numerals and punctuation symbols, dramatically simplifies parsing and search. Nimur (talk) 19:00, 22 January 2011 (UTC)[reply]
I always take it as an attempt to state, "FORGET EVERYTHING THAT YOU MAY THINK IS OF ANY IMPORTANCE IN YOUR LIFE AND MAKE MY QUESTION THE MAIN PURPOSE OF YOUR ENTIRE EXISTENCE!!!" So, yes, it is basically shouting. -- kainaw 01:12, 23 January 2011 (UTC)[reply]
I've noticed that a lot of the all-caps posters are from places that use other alphabets. Devanagari, for example, does not have two letter cases, so it's easy to understand posters unfamiliar with English conventions to not see much importance in which style to use. --Sean 14:38, 24 January 2011 (UTC)[reply]
Single-case people might prefer all caps to all lowercase for correctness reasons. We fill out crossword puzzles in all caps, for example, because "usa" or "tuvalu" are considered incorrect, but "BOOGIE" and "TEPEE" are not incorrect. Paul (Stansifer)
I'd always assumed it was old people not familiar with computer use and etiquette. Like someone else mentioned, if you're using a typewriter for an informal note, and you're not a good typist, it's pretty common to leave the caps lock on and write your note in block letters. (With modern fonts, it looks a lot better for informal notes to be all lower, but that's not as true with a typewriter font.) APL (talk) 06:31, 25 January 2011 (UTC)[reply]

Smart phone prices

Why do brand-name smartphones have disproportionately higher prices than similar brand-name media devices that lack phone functionality? A great example is the iPhone. The outright purchase price for the latest, basic capacity iPhone purchased with no contract, where available, and officially unlocked, where available, has always been about TRIPLE the price of a similar iPod Touch. The main difference between the two is phone and 2G/3G data functionality, as well as GPS. Sure, it may also include a better processor and RAM or other components, but triple the price? Really? Now I know the case may be a little different with the iPhone's prestige and incredibly high demand, but you can compare offerings from other manufacturers (even comparing models from different brands) and you will find high-end smartphones in the U.S. and Canada generally cost $400-$650 to purchase outright, not subsidized by a contract. Meanwhile, high-end media players (and in the past, PDAs) that include wifi, bluetooth, similar storage and connectivity, and sometimes even integrated GPS - pretty much everything the same except no cell phone network capability - will normally cost about 1/3 to 1/2 the price. Sometimes, even the operating system or software is the same (Android, Windows, Palm etc). Especially with Android now being a comprehensive software package that costs nothing to license, the development cost for both these categories can be significantly lowered, yet the phones remain 2 to 3 times more expensive than very comparable devices without a phone radio. Oh, and you really can't say that the phone radio itself contributes to this cost, because other cheap phones for well under $100 often have the same radio, even 3G data capable. Roberto75780 (talk) 19:08, 22 January 2011 (UTC)[reply]

Because in commodity consumer electronics, the "free market" has determined that technical specs are among the lowest priority factor during purchase decisions. Peruse this Google Book search result list. As a relevant comparison, peruse some commodity personal-computer websites - as of 2011, I had to click through three or four pages on each major-brand's website before I even got to see the CPU type. Consumers want colorful, fashionable, brand-identifiable products. Businesses cater to that market demand, and set prices accordingly. Arguably, this is the most intelligent way that major brand consumer electronics companies can fight back against commodification. Nimur (talk) 19:20, 22 January 2011 (UTC)[reply]

Eh, mentioning Apple products is starting off wrong, as they have consistently been twice the cost for ages now. You can get smart phones for $100, I just got one last week. The more popular the phone, the more units sold, the more money made, the happier they are to lower the cost. ¦ Reisio (talk) 22:19, 22 January 2011 (UTC)[reply]

Prestige, demand, cost to produce, cost to innovate (you're paying for R&D in some cases), cost of advertising... all of these things factor into the price points. --Mr.98 (talk) 23:49, 22 January 2011 (UTC)[reply]

WiMAX and competing 4G phone service

3G phones that work on UMTS (also known as WCDMA) are always capable of "falling back" to 2G or 2.5G GSM service when 3G networks are not available in the area. My question is, do the newest 4G phones "fall back" to GSM as well? Can they first "fall back" to 3G signals if available in the area? I'm mainly interested in the WiMAX type, but any competing (truly 4G) standards are also of interest. Roberto75780 (talk) 19:22, 22 January 2011 (UTC)[reply]

January 23

NFC and RFID

As I understand it, near field communication is a relatively recent technology that allows devices such as mobile phones to function as both RFID readers and RFID transmitters. I'm a little unclear as to what this means in practice, though. For example, does it mean that an NFC-equipped phone could query a typical RFID smart card or keyless fob, record the signal it generates, and then duplicate it on demand for the associated keyless entry system? At present RFID keys tend to be relatively hard to duplicate (not that many people have the technology and know-how), but it would seem that NFC will make duplicating RFID keys almost trivial. (So easy, in fact, that people could do it without the key's owner ever losing control of the key.) Is that a correct understanding of the technology? Dragons flight (talk) 00:53, 23 January 2011 (UTC)[reply]

Yes, all of the assumptions you made are in fact true in principle. I don't know how this will play out in practice. Wallets with wire mesh pockets that shield radio waves have been marketed (mostly online) for a few years now for this exact reason: to protect the RFID in bank cards, key fobs and so on from being read (and hence duplicated) without the knowledge of the owner. Roberto75780 (talk) 04:55, 23 January 2011 (UTC)[reply]

Presenting numbers in descending order in VB list box

My task is to make a form demonstrating various loops (For...Next, Do While, Do Until, etc.) by having two inputted numbers (lower bound and upper bound), so that even numbers go into an even list box and odd numbers go into the odd list box. As well, if the lower bound number is higher than the upper bound number, the list boxes are to present the numbers in descending order. I have had success with the For...Next loop (as I used the "Step" keyword) but I am having difficulty with the Do While loop. I cannot seem to make the numbers present themselves in descending order. What should I be trying to employ here? 24.89.210.71 (talk) 01:20, 23 January 2011 (UTC)[reply]

To present numbers in descending order you have to start at the highest number and subtract from that each time round the loop. To get a loop working correctly so it only outputs the numbers it should, you need to carefully consider how big a step is needed on each iteration and how each type of loop decides whether or not to go round again. Your teacher should have described how each loop works and particularly should have mentioned when each loop does its test on whether or not to continue. See For loop, Do while loop and While loop for more info. Astronaut (talk) 01:39, 23 January 2011 (UTC)[reply]
Do/While and For/Next loops have different properties. Just for example, here are two such loops in VB which do the same thing:
i = 5
Do While i > 0
    Print i
    i = i - 1
Loop

For i = 5 To 1 Step -1
    Print i
Next i
Both of these will have the same output. In the For/Next loop, I change how the value iterates using Step, and set my bounds in the For statement itself. For the Do/While loop, I set the starting bound of the variable explicitly in the code, then I set an end condition, and then the iteration is done in the code itself (the i = i-1 part). Make sense? --Mr.98 (talk) 02:21, 23 January 2011 (UTC)[reply]

I appreciate that you have taken the time to reply and offer these suggestions. My code for my For...Next loop resembles yours and I am able to use "Step" as you have indicated; in the For...Next loop, I can just alter the "Step" direction (positive or negative) as required. My problem lies with the Do While button, as I cannot seem to figure out how to incorporate direction. My code again resembles yours, but since the numbers are going to a list box, they are defaulting to display "ascending only", and the occasion for them to be displayed descending seems to be eluding me. 24.89.210.71 (talk) 03:00, 23 January 2011 (UTC)[reply]

Is it possible the list box has its sorted property set to true? Astronaut (talk) 04:00, 23 January 2011 (UTC)[reply]
An easy way to test whether it is a problem with the List box or with the code is to have the list code instead output somewhere else — e.g., append the numbers to a string and then use Msgbox to show the string. It'll show if your problem is with the loop or the box. Otherwise, without seeing your code, there's not really any way for us to tell what is wrong. --Mr.98 (talk) 17:16, 23 January 2011 (UTC)[reply]
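If it is the loop rather than the list box, the logic can be sketched outside VB; here is a minimal Python version of the idea (function and variable names are illustrative), which translates directly to a Do While loop: compute the step direction from the bounds first, then add the step inside the loop body.

    def collect_range(lower, upper):
        step = 1 if lower <= upper else -1   # count down when lower > upper
        numbers = []
        i = lower
        while i != upper + step:             # stop once 'upper' has been added
            numbers.append(i)
            i += step
        return numbers

    print(collect_range(1, 5))   # [1, 2, 3, 4, 5]
    print(collect_range(5, 1))   # [5, 4, 3, 2, 1]

In VB terms: set i to the lower bound before the Do While, test against the upper bound in the loop condition, and add either +1 or -1 to i inside the loop.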

Future 128-Core Processor

Hello,

  I'm wondering whether in the near (or not so near) future, it may be possible to make a 128-core processor. I mean a processor with 128 CPU cores, not GPU cores or just ALUs. I envision 128 cores on a single die, arranged in a 12x12 square, with a 4x4 vacancy in the middle. Could this be possible, and any thoughts on how many years it might be before such processors come along?

This is really more of a theoretical discussion than a question.

  Thanks to everyone. Rocketshiporion 02:48, 23 January 2011 (UTC)[reply]

Intel Labs designed a 128-core chip in 2007: http://gizmodo.com/239761/intel-develops-128+core-terascale-superchip . But putting lots of cores on a die doesn't by itself do you much good. The problem with all multi-core operation is contention for inter-processor (and processor-memory) connections. It's initially tempting to build a connection from each processor to all the others (that's a complete graph), but with 128 nodes that would need over 8,000 full-bandwidth interconnects (and the entire die surface would be nothing but interconnects). So designers use a crossbar switch (or the like) to manage the interconnections; but this means that there is contention for the switch, and the more cores you have, the more contention. Depending on the task, you can easily reach the point where almost all of each core's time is spent waiting on the crossbar (and so there's no point in having so many cores). An attempt to mitigate this is non-uniform memory: each core has some memory of its own, and it only goes to the crossbar to access global memory or to talk to other cores. The IBM Cell processor (which drives a PS3) has such non-uniform memory, as (in different dies) does IBM's NUMA-Q architecture. But this leads to the second problem: writing programs for these things. Writing programs that efficiently make use of limited local memory and access non-local memory and other CPUs over the crossbar efficiently is, in general, beyond the current state of human programmers, compilers, and run-time systems. A few tasks that are intrinsically and obviously parallelisable (like media coding and image processing) can be automatically divided up. But for general computing (and that's what you use a CPU for) it's not really possible (and to the extent that it is, not really worthwhile). There's some hope in implicitly parallelisable programming models (like Parallel Haskell), but even then, filling 128 cores properly is a tall order. This is the real worry for those who hope CPU performance will continue to increase - clock speeds really haven't advanced much in several years, with effort instead going into multiple cores. But as you add cores you lose efficiency in the organisation of the distributed task, until you reach a point where adding more cores does no good at all. For most tasks, that's way before you get to 128. 87.113.206.61 (talk) 03:15, 23 January 2011 (UTC)[reply]
If you're thinking "well, servers already have lots of cores, albeit in different CPUs", you'd be right (Sun, HP and IBM machines scale up to 512 cores or so). But servers run hundreds of concurrent, unrelated operations (like webserver queries) which don't contend (much) with each other. When they do (when one thread in a database server has to write to an index that 100 other threads are using), performance crashes. A lot of the work in developing an industrial-scale database server is minimising the circumstances where, and consequences of, such contention. Similarly, supercomputers and distributed compute clusters (like Beowulf) have thousands of cores, and potentially suffer from all the same problems above. The stuff people typically run on supercomputers (things like finite element analysis) is highly parallelisable; those are not the general problems most people want their computer to solve. 87.113.206.61 (talk) 03:29, 23 January 2011 (UTC)[reply]
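For concreteness, the "over 8,000 interconnects" figure above is just the complete-graph link count n(n-1)/2, as this small Python sketch (with illustrative core counts) shows:

    def complete_graph_links(n):
        return n * (n - 1) // 2   # one link per unordered pair of cores

    for cores in (4, 16, 128, 512):
        print(cores, "cores ->", complete_graph_links(cores), "links")
    # 128 cores -> 8128 links; doubling the core count roughly quadruples
    # the wiring, which is why designers fall back on crossbars and meshes.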
As mentioned before, there are already research processors with 128 cores. For a commercial example, I don't think there are any processors with 128 cores, but Tilera already has the 100-core TILE-Gx100. The Gx100's cores are arranged in a 10 by 10 array, and a mesh network connects them together. But unlike the Teraflops Research Chip, the Gx100 has no floating-point units; the cores implement a 64-bit, 3-way VLIW, integer-only architecture. If I am not mistaken, Tilera is to introduce its next-generation TILE processors this year, and these will have some 120 to 130 cores. If you want to look at more specialized processors, I believe there are embedded processors with hundreds of 8- or 16-bit cores for digital signal processing, but these processors have cores that are very limited. Rilak (talk) 03:50, 23 January 2011 (UTC)[reply]
The Tilera processors' cores don't appear to be full-function CPUs. I was getting at a 128-core processor where each core has at least as much functionality as a Nehalem or Westmere core, and where the 128-core processor is a computer's primary processor. The Tilera cores seem to be intended either for use in computing appliances or for co-processing. Rocketshiporion 10:54, 23 January 2011 (UTC)[reply]
It may depend on whether you want to keep the i386 architecture or not. It's certainly possible to make a CPU core with few transistors: they did it back in the old days. In my (very limited) understanding, there are a few reasons why we have so many millions of transistors in modern processors:
  • parallel execution: the main way that processors have been getting faster is by executing multiple instructions at once in a pipeline. The trouble with this approach is that the programmer (well, the compiler) needs to get the same results as if the instructions were executed in order. The fact that this process is running out of steam is the reason we're adding cores in the first place. If you have 128 cores and you've solved the cache coherency problem, the shared memory bottleneck, and figured out how to write parallel programs, you don't need this anymore.
  • architecture cruft: the i386 architecture is backwards-compatible to before many programmers were born. Furthermore, it's a CISC processor; a RISC processor would be simpler. I've heard it said that a modern Intel CPU is basically a RISC processor underneath a translation layer. I don't know how true this is.
  • fancy instructions: the i386 also has a lot of fancy features for doing certain specific operations fast. Does a processor without the XOP instruction set or 3DNow! count as "full-function"?
    • including floating point math: division is hard. If you don't need it, or if not all of your cores need it (so that you can still actually use all 128 cores), you can save a lot of space.
So, it depends. Paul (Stansifer) 19:25, 23 January 2011 (UTC)[reply]
Here's a picture of the Pentium III die layout (warning: 6MB TIFF file). I found it on this page, which has a lot of other interesting technical information about Intel CPUs. On-die cache is the biggest thing you forgot about. In addition to the data cache and L2 cache, I assume the region marked "instruction fetch" is mostly cache. Out-of-order execution (which isn't the same thing as pipelining) also takes a lot of die area. The execution units that do all the number crunching are a surprisingly small part of the total area. It's more or less true that x86 CPUs (Intel's and AMD's) decode the CISC instructions to RISC-like instructions, which are called micro-ops.
there are a few reasons why we have so many millions of transistors in modern processors - actually, the main reason is that we can, and that it's cheap. We have been, for quite a while, beyond "best performance per transistor". However, adding more transistors is very cheap, and so designers are willing to go after even small improvements. Paul's list contains some examples of the things people try. They are all good points, but both backwards compatibility and fancy instructions no longer play a large role - instruction decoding is only a small part of the logic nowadays, and backwards compatibility costs at most embedding an ancient chip somewhere in your modern core. By sheer number of transistors, cache is the biggest contributor, because it has a high impact and is easy to design. --Stephan Schulz (talk) 12:53, 24 January 2011 (UTC)[reply]
By full-function, I meant a CPU core which implements the full x86-64 instruction set. That includes both integer and floating-point operations.
@87.113.206.61 - A 128-core processor would need 16,256 full-bandwidth interconnects, but covering the entire die with interconnects should not be a problem, if the interconnects were layered on top of the grid of cores, which would essentially be a double-decker processor.
Rocketshiporion 00:05, 24 January 2011 (UTC)[reply]
Each core in the TILE-Gx100 is a processor, in the sense that each fetches and executes its own independent instruction stream. I think I have misunderstood what a core is in this discussion. The Tilera processors are also not coprocessors or application-specific: Quanta Computer will ship servers later this year with the TILEpro-64 running Linux for cloud computing applications. If you got the impression from Tilera's solutions page that TILE processors are coprocessors or application-specific, that is just one potential application. Processors have in the past been used as coprocessors; SGI in the 1990s used Intel i860s as geometry accelerators for graphics hardware. The same processor could be used stand-alone. I'm sorry for digressing, I just thought I should clarify Tilera's situation.
About the number of links a 128-core processor might need: what sort of network should such a processor have? If each core were linked to every other core by a one-bit, bi-directional, 5 GHz link (one wire per direction per pair of cores), then 16,256 wires would be needed. Such a link would have a peak bandwidth of 625 MB/s. Since these sorts of links usually do not have additional wires to implement the protocol that controls data movement, the same wire needs to carry protocol information, reducing usable bandwidth. One can get more bandwidth from today's consumer network hardware than from such a link. For better performance, the number of wires per link would have to be increased - to 16, for example, at which point some 260,000 wires are needed.
About covering the cores with these links: my (poor) understanding of VLSI design and technology says that this is a problem. To have a complex, high-clock-rate Nehalem-class core, having plenty of interconnect levels is very important to ensure that the connections between the transistors can be made and kept short. My (poor) understanding is that process limitations mean that the interconnect levels closest to the substrate are the ones with the finest widths, and are therefore the most suitable for local routing because of their fine width and resulting speed (as they have low capacitance). Simply placing the links above the cores means that the interconnect levels with the thicker wires will have to be used. These are going to be power-hungry, not so fast (especially over long distances, such as from one side of a die to another), and will be difficult to route, as every core needs a link to every other core. Any one of these characteristics could make such a scheme impossible. Rilak (talk) 04:44, 24 January 2011 (UTC)[reply]
I did a bit of research on the Teraflops Research Chip, and it turns out that it only has 80 simple cores, not 128, and each core has two FPUs, so it would not meet any of your requirements. Also, I found out that the Tilera processor with more than 100 cores is actually due in 2013, not this year. My bad. Rilak (talk) 05:36, 24 January 2011 (UTC)[reply]
Larrabee cores support the full x86-64 instruction set. Apparently they were planning 48-core chips by 2010, but things haven't gone according to plan. -- BenRG (talk) 07:43, 24 January 2011 (UTC)[reply]

What is the typical IDE - and "virtual Windows or Linux environment" - used to develop application code for Android-based mobile phones or iPhones?

I'm not familiar with how developers simulate the mobile phone environment for Android-based phones or the iPhone when developing applications. I'm trying to understand the most commonly used IDEs and "virtual environments" that developers would use to simulate the mobile phone environment (for Android phones and for iPhones) to develop and test application code. Are there commonly used open-source systems for this? What about the most commonly used commercial software systems? —Preceding unsigned comment added by 71.198.81.56 (talk) 04:06, 23 January 2011 (UTC)[reply]

Google and Apple each publish an SDK that includes a phone emulator that can run software designed for the phone. The SDK and emulator are not open source and are specific to that mobile operating system (Android, iOS, etc.). Third-party emulators are also available, but they are often not used by application developers. Roberto75780 (talk) 04:49, 23 January 2011 (UTC)[reply]

For Android the SDK is available here[5] for Windows, Linux, or OS X. It can be run from the command line without an IDE, but plug-ins for the Eclipse IDE are available from Google.
For iPhone (iOS) you would use a Mac with OS X running the Xcode IDE, and write code in Objective-C.[6] More information is at iOS SDK. You have to pay Apple for the SDK, while the Android tools are free (though not open source). --Colapeninsula (talk) 11:08, 24 January 2011 (UTC)[reply]

Starting a program, wait a defined time period and then gracefully close in Windows

Resolved

Hi. While I could probably work this out myself, I've tried quite a lot of searches and a few things, and it seems a bit pointless to spend a long time on something which is hopefully trivial for someone here.

I'm looking for a way in Windows to start a program, wait a defined time period and then gracefully shut down the program (added bonus if it forcefully terminates the program if necessary after a further defined time period). The exe is Chrome, although Opera would probably be okay. (FF and IE take too long to start, although I actually found something which works for IE, since it's better integrated with VBS.)

I can do the starting and waiting fine; my main problem is working out how to shut down. I could use taskkill, but it seems there should be a way of handling it in the script, since the script started the program, rather than relying on additional external programs. An even better bonus is if Chrome can be started minimised or in the background (and stay that way).

While I'm using VBS partially because I never installed any other scripting programs on this computer (as you may guess, I don't script much), I'd welcome code for anything else provided it will work on Windows without too much effort and is in a language that isn't too hard to understand.

If you're wondering, I want to open a set of URLs one by one, i.e. quitting after waiting for a URL to load (because of the SWF used in what I'm doing, I'm not sure it will be easy to work out when the page has finished 'loading', so just waiting a defined period is okay), then opening Chrome again with the next URL (it will end up being a lot of URLs, so loading them all simultaneously is out of the question). I believe I can work this part out myself, so that isn't the issue. Of course I don't have to quit, but it seemed the best and easiest bet to keep everything clean, and Chrome loads fast. (I looked for an addon which could open a list of URLs one by one in the same window, waiting a defined period between loading each new one, but eventually gave up.)

P.S. Something like wget won't work because of the need to understand SWF. P.P.S. Going by the window name won't work because this can change.

Thanks Nil Einne (talk) 12:51, 23 January 2011 (UTC)[reply]

While I imagine there's likely to be a more graceful way of doing this (I'm not much of a programmer), have you considered an automation scripting program like AutoIt to perform this task? -Amordea (talk) 16:30, 23 January 2011 (UTC)[reply]
Thanks for the suggestion. I did consider AutoIt for this briefly but wasn't sure it could do what I wanted. I was thinking I'd probably find some way to get the Chrome window to start minimised so this happens in the background, and while this may still be possible in AutoIt (worst case you can use shortcut keys, I guess, or probably some other program), I wasn't sure, since it's been a while since I used it (actually it's not even installed on this computer; I downloaded it a few weeks ago for something else but didn't end up using it). However, I eventually decided just to try taskkill with VBS, and while this worked, my tests made me realise what I was doing wasn't going to work as well as I'd hoped. So I've decided to use the Jython-based Sikuli [7] (which I've been using for a related project) and stick with doing it in the foreground, using the graphical aspects to help determine when what I'm doing has finished loading (I didn't consider it at first because I thought the graphical aspects were unnecessary and just added complexity). Nil Einne (talk) 17:43, 23 January 2011 (UTC)[reply]
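For future reference, the start/wait/graceful-close/force-kill pattern discussed here can be sketched with Python's standard library alone; the path, URL and timings below are placeholders, and note that on Windows terminate() and kill() are both hard kills, so a truly graceful close needs something extra (such as sending the window a close message):

    import subprocess, time

    proc = subprocess.Popen([r"C:\path\to\chrome.exe", "http://example.com/"])
    time.sleep(30)                 # wait a fixed period for the page to load
    proc.terminate()               # ask the process to exit
    try:
        proc.wait(timeout=10)      # allow 10 seconds for a clean exit
    except subprocess.TimeoutExpired:
        proc.kill()                # force-terminate if it is still running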

Wikipedia Main page on newest Opera

I don't usually follow this desk, so I don't know if this has been asked before, and it's kinda hard looking for a possible answer through the archives, with the two search strings being Opera and Wikipedia - a lot of white noise there.

So here goes: a couple of weeks ago my Opera auto-upgraded from version 10.x to version 11.00, and I had to do some fine-tuning to get it back to how I want it. But there is one new thing that annoys me, and I can't figure out how to change it: the Wikipedia main page no longer accepts pressing the enter key as the "Go on now, start looking will you?" command. I have to type the search string into the search field and then grab the mouse and move the pointer to the "search" button, which is annoying. I thought maybe it was some new Wikipedia setting, but pressing "enter" to start searching works fine in FF. I tried looking on the Internet for some hints, but had the same white-noise problem I mentioned above. I got as far as figuring out that I probably need to change something on the about:config internal page of the program, but I have no idea what. Help? This doesn't happen for other pages (for instance, Google) or inside Wikipedia with the search field on the side (or rather, on top; I remembered now that I kept the one-before-last Wikipedia skin, and the search option is on top in the newest one), only on the main page of Wikipedia. TomorrowTime (talk) 21:03, 23 January 2011 (UTC)[reply]

Opera has a long history of being buggy. ¦ Reisio (talk) 21:33, 23 January 2011 (UTC)[reply]

I use Opera 11 and the ENTER key works like a charm for me on the main page when I type in the search box.--Best Dog Ever (talk) 01:30, 24 January 2011 (UTC)[reply]

He means http://www.wikipedia.org/, it doesn't work. ¦ Reisio (talk) 01:37, 24 January 2011 (UTC)[reply]
Yes, that's what I meant. Any ideas if anything can be done about that or is it some sort of bug? TomorrowTime (talk) 04:06, 24 January 2011 (UTC)[reply]
Since Opera 11 seems to still be able to submit form data with the ENTER key on other sites, I'd say it qualifies as a website "bug", and so you might bring it up at WP:VP/T and/or http://bugzilla.wikimedia.org/. As I said, though, Opera has a long history of being buggy, and spending too much time debugging for it is not particularly wise, IMO. ¦ Reisio (talk) 08:36, 24 January 2011 (UTC)[reply]

January 24

Mac OS X weirdnesses

I'm using the up-to-date version of Mac OS X on a MacBook. When I click on several sorts of things I am getting no response. For instance when I click on a Saved, TextEdit document, it does not open. Ditto for regular folders—they do not open. There are other weirdnesses, but those 2 basic examples might be enough for someone to diagnose the problem, which I can't. The happy thing is that I have set up a separate "Test" account, which I'm using right now. This account, on the same computer, is quite separate from the usual account, and everything seems to work fine in this account. Anybody know what's going on? I haven't a clue. Bus stop (talk) 00:39, 24 January 2011 (UTC)[reply]

I'm not sure, but have you tried relaunching the Finder? Apple > Force Quit > Finder > Relaunch. I find that when I occasionally have weird user interface glitches that deal with Finder, relaunching usually fixes it. --Mr.98 (talk) 00:59, 24 January 2011 (UTC)[reply]
Nope, it did not help. Good try, though. I was hopeful for a while. I am back at my usual account now, and these things don't work. It is so insane not to be able to open folders or open closed TextEdit documents. Thanks. Bus stop (talk) 01:11, 24 January 2011 (UTC)[reply]
I assumed you tried rebooting? Again, I do find that every once in awhile, Finder glitches out on me in weird ways, but that it usually goes away with a reboot or relaunch. If that's not the case, I'd take it by an Apple store and see what they said. The other thing to check, which might be too obvious, is to see whether it is fixed by plugging in an external mouse — e.g. if it has something specifically to do with the trackpad or not. --Mr.98 (talk) 12:40, 24 January 2011 (UTC)[reply]
Mr.98—yes, I have rebooted. I actually had been using a mouse. The problem persists identically with mouse or trackpad. My plan is to move all my files onto a thumb drive, and then move them from the thumb drive to a new account. The "Test" account that I started works fine. Bus stop (talk) 05:17, 25 January 2011 (UTC)[reply]
Two possibilities come to mind: You may have set permission on files in a way that you cannot access them with the main account, or you may have broken the association of file name extension and program. If you feel comfortable with the shell, open a terminal, navigate to the file, and try to open it with file or less, and see if that works. ls -l will show you the permissions. For a typical Mac user, select the file in the Finder and select File->Get Info. Check the owner and the permissions. If all is ok, try to open it by right-clicking or Ctrl-clicking and select "Open with", then specify the particular application you want. --Stephan Schulz (talk) 13:00, 24 January 2011 (UTC)[reply]
Stephan Schulz—it is all files—old and new—so I don't think permissions are the problem. Also, checking "Get Info" seems to say that I have the "Privilege" to "Read" and "Write". I'm pretty lost when it comes to the Terminal. I ran the "Disk Utility" for "Repair Disk Permissions"—to no avail. I'm just going to move everything to a new "Account". While my files are on a thumb drive I may try to erase the disk and reinstall the operating system off of the DVD for OS 10.6. The trouble with that is I think that will entail a lot of "updates", because we are up to OS 10.6.6. Bus stop (talk) 05:17, 25 January 2011 (UTC)[reply]
Weird indeed. Before you try to reinstall, try to repair the disk. Boot from the OS disk provided with the system, select "Disk utility" from the menu, and then "Repair Disk" on your main drive. Or boot into single user mode, remount the disk, and run fsck manually - but if you are lost with the terminal, this may not be the best way... --Stephan Schulz (talk) 13:05, 25 January 2011 (UTC)[reply]
Thanks everybody. I still can't figure it out. But I am making slight progress. I have found a ludicrous "workaround". The workaround might shed light on the problem. This is what I have discovered: it seems to be only in the Finder that things don't work. For instance, I cannot launch an Application by clicking on it in the Application folder, but if I drag it to the Dock, at the bottom of the screen, I can then click on it and successfully launch it. The same thing with TextEdit files—if I first drag them to the Dock, I can then click on them in the Dock, and access their contents. And also for folders as might be found on my Desktop—I cannot open them on my Desktop, but I can drag them to the Dock and then open them from the Dock. What would cause the Desktop or the Finder (not sure what the difference is) to become dysfunctional this way? And how would that which is in the Dock escape a problem that seems to exist everyplace else? As far as I can tell, that which is in the Dock seems to act similarly to an "alias" of a file. But I definitely know little about this stuff. Bus stop (talk) 16:31, 24 January 2011 (UTC)[reply]
I'm afraid I don't know; but when Finder wigs out, which is not entirely unheard of, you get weird results like this. If you do figure out a solution, please come back and tell us, for future reference... --Mr.98 (talk) 13:13, 25 January 2011 (UTC)[reply]
I've just abandoned the account. It's as simple as that. I'm not messing with it. All my stuff has been moved by thumb drive from one account to another. I even gathered up my bookmarks from Firefox and brought them to Firefox at the new account. It's like a ghost town back there. Maybe I'll eventually bring it into an Apple store and let the "Geniuses" see what Steve Jobs has wrought. (I'm really a big fan of Apple products.) Bus stop (talk) 16:46, 26 January 2011 (UTC)[reply]

The crazy thing is I had a problem with this computer some months ago that was similarly frustrating and it solved itself mysteriously. I asked about it here. This is the thread in Archives. I feel like one of those Münchausen syndrome by proxy kind of people, but this is real. The computer does weird things. Bus stop (talk) 15:11, 25 January 2011 (UTC)[reply]

Future 1TB DIMM

Hello,

  I remember a time (when I was in lower primary school) when the largest RAM module available was 256MB, and now we have 16GB DIMMs. In another decade or more, could we have one-terabyte DIMMs? Or will we have reached the point where dual-die packages cannot be shrunk any further? Thoughts?

I'm the one who asked the Future 128-Core Processor question yesterday.

  Thanks. Rocketshiporion 00:45, 24 January 2011 (UTC)[reply]

Based purely on physical space, a DIMM could hold several orders of magnitude more data than any modern-day PC has in hard-drive space. If you could use a cube with 10 nm sides to hold a single bit, a 1.5 cm × 10 cm × 3 mm chip of such a material could hold over 500000 (five hundred thousand) tebibytes of data. It could well be possible to manufacture a transistor less than a cubic nanometre in size (considering it's possible to use a single atom as the active region, per [8]), meaning such ultra-high-density RAM may likewise be possible. The main factor here, though, is whether it's economically feasible. If Moore's law continues to hold all the way down to the single-nanometre scale, we might see 1 TiB RAM DIMMs by 2023.
Of course, it's also quite conceivable that by then, we've moved away from our present-day computing paradigm altogether. For example, there is a very real possibility of non-volatile random access memory becoming the next big thing. Essentially, if you have a medium that is as fast and durable (in terms of rewrite cycles) as modern-day RAM, but capable of holding its data on power-off like magnetic storage, I would imagine that before long, this would be pushed to vast sizes (just look at modern-day SSDs: their capabilities are quite limited, and they are prohibitively expensive, but 1TB models already exist). In that case, it's quite likely the separation between storage and memory will disappear entirely, and that you could partition your NVRAM drive with 1TB intended as memory. --Link (tcm) 09:50, 24 January 2011 (UTC)[reply]
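Link's density figure checks out as back-of-the-envelope arithmetic; a short Python verification, assuming one bit per 10 nm cube as stated above:

    chip_volume = 0.015 * 0.10 * 0.003   # 1.5 cm x 10 cm x 3 mm, in cubic metres
    bit_volume = (10e-9) ** 3            # one 10 nm cube per bit
    bits = chip_volume / bit_volume
    print(bits / 8 / 2**40)              # about 511,591 TiB - "over 500000 tebibytes"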
We were happy to have a 7489 RAM. Cuddlyable3 (talk) 19:13, 24 January 2011 (UTC)[reply]
What an interesting video - such a shame it told us nothing about RAM. General Rommel (talk) 04:30, 25 January 2011 (UTC)[reply]
@Cuddlyable3 - I fail to comprehend the relevance of those four distinguished Yorkshiremen to a 1TB stick of RAM, or even to the 7400 series ICs. Rocketshiporion 11:18, 26 January 2011 (UTC)[reply]

Wgetwin mass download

I would like to download a lot of pictures from a server with wgetwin. Here is the URL for the 1st picture for example:

http://img2.iwiw.hu/0201//user/00/96/24/28/6/user_9624286_1293029264899

These pictures don't have extensions. The rest of the pictures have roughly the same URL; only the last five numbers change. My first idea was:

wget -r http://img2.iwiw.hu/0201//user/00/96/24/28/6/user_9624286_12930292*

but this doesn't work, neither does

http://img2.iwiw.hu/0201//user/00/96/24/28/6/user_9624286_12930292[*]

or

http://img2.iwiw.hu/0201//user/00/96/24/28/6/*.*

I tried a few other syntaxes as well and Flashget too but none of them seems to work. How could I do this? —Preceding unsigned comment added by 195.70.54.59 (talk) 13:21, 24 January 2011 (UTC)[reply]

Globs, in most circumstances, only work on local filenames. They are typically interpreted by the shell, which expands a glob like /usr/bin/*hat* by looking through the contents of /usr/bin/ and seeing if any of the files in there contain "hat". But there is no ls (or, since you're using Windows, dir) command for URLs; there's no general way to ask a webserver "what are all of the valid URLs beginning with http://en.wikipedia.org/wiki/?" You'll need to find a list of the URLs you want to download, and run wget on each of them in sequence using the shell. I can't offer advice on shell programming in Windows, though. Paul (Stansifer) 15:36, 24 January 2011 (UTC)[reply]

for i in {00000..99999}; do wget http://img2.iwiw.hu/0201//user/00/96/24/28/6/user_9624286_12930292"$i"; sleep 2s; done You could also presumably do curl 'http://img2.iwiw.hu/0201//user/00/96/24/28/6/user_9624286_12930292[00000-99999]' -o '#1.jpeg', but I'm not personally aware of how to tell curl to have a delay before moving to the next item, and IME no delay will get you blocked fairly quickly by any server that's been decently configured. ¦ Reisio (talk) 18:46, 24 January 2011 (UTC)[reply]

Some web servers, like Apache, provide directory listings to show all files available. Other web servers are intentionally configured to deny listing all files - because they don't want you doing what you're doing. If the server admin wanted you to bulk-download the files, they would have made it possible to list all files through HTTP, or given you access via some other protocol. Nimur (talk) 01:02, 25 January 2011 (UTC)[reply]
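For comparison, the same polite sequential fetch can be written with Python's standard library; the URL stem is from the question, while the 00000-99999 suffix range is an assumption and most of those URLs presumably will not exist:

    import time
    import urllib.error
    import urllib.request

    BASE = "http://img2.iwiw.hu/0201//user/00/96/24/28/6/user_9624286_12930292"

    for n in range(100000):                      # suffixes 00000 .. 99999
        url = BASE + format(n, "05d")
        try:
            urllib.request.urlretrieve(url, format(n, "05d") + ".jpg")
        except urllib.error.URLError:
            pass                                 # skip suffixes that don't resolve
        time.sleep(2)                            # rate-limit, as advised above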

Where to post an idea for improving the MediaWiki engine?

Hello! Where should I post a concept for an enhancement that could be introduced to the MediaWiki engine? The idea deals with making it easier to trace one's own watchlist (in Wikipedia and other wiki projects).

MusJabłkowy (talk) 16:59, 24 January 2011 (UTC)[reply]

You could vet it through WP:VPD first if you like, or go straight to http://bugzilla.wikimedia.org/. Bug reports usually go farther if you go ahead and attach a patch that does all the work already, but if it's a truly good idea, someone will get it done regardless. ¦ Reisio (talk) 18:49, 24 January 2011 (UTC)[reply]

ocropus examples

I just installed OCRopus, but I find the documentation lacking. Can someone point me to some examples of how to use OCRopus to define a form and then read in filled-out forms as data? -- kainaw 18:14, 24 January 2011 (UTC)[reply]

Compiling Darwin (NOT Mac OS X) or XNU (Darwin's kernel) on Linux

Is it possible to compile Darwin or XNU on Linux? I'm using Ubuntu 10.04 LTS (Lucid Lynx). --Melab±1 22:16, 24 January 2011 (UTC)[reply]

I'm sure you can in one way or another, but I wouldn't want to bother myself. ¦ Reisio (talk) 22:22, 24 January 2011 (UTC)[reply]
Here are official instructions from Apple for XNU: http://www.opensource.apple.com/source/xnu/xnu-1504.9.26/README - it looks like all you need is a compliant C compiler and a Make toolchain (source code for both is provided if you don't already have gcc and make). I haven't independently verified that the build works, though. Other open-source tools for Darwin are listed here. Bear in mind that not everything in OS X is open source - so even if you manage to get all of these tools to build, you won't have a ready-to-boot OS X clone. (You might not have anything that you can boot.) These projects are just the kernel. Apple's ports of a few GNU tools, some shells, and some other programs are also available. But the kernel alone is not the operating system. If you aren't sure what the distinction is, you might want to read our article on the kernel. Also read Apple's Darwin & Core Technologies tutorial. If that document does not provide the information you need, chances are high that you are seeking information that's not available. Nimur (talk) 00:44, 25 January 2011 (UTC)[reply]

January 25

Apple compatible

Why are there IBM compatibles, but no Apple-compatible computers (aside from computers made by Apple itself)? ScienceApe (talk) 02:13, 25 January 2011 (UTC)[reply]

See Mac clone. Dismas|(talk) 02:32, 25 January 2011 (UTC)[reply]
These days, basically all personal computers have the same i386-based architecture, so in some sense they're all Apple compatible. There are technical issues, but they are relatively minor; see OSx86. The main obstacle is legal: Apple forbids the use of OS X on machines not sold by them. Paul (Stansifer) 04:05, 25 January 2011 (UTC)[reply]
In short, legal reasons. Apple adopted, in the nascent days, a proprietary architecture model, whereas IBM adopted an open architecture model. As a result, x86 clones abound, but Apple is the one and only arbiter of Apple-ness. It would be possible to emulate Apple architecture on any x86 machine if only Apple legally allowed it. The fundamental disagreement is one of format: the IBM x86 line and its latter-day successors prefer open architecture, while Apple prefers to chain users to its system. As a result, x86 emulators abound but Apple emulators are nonexistent, thanks primarily to the fact that almost all software companies of note develop for x86 natively. 65.29.47.55 (talk) 08:33, 25 January 2011 (UTC)[reply]
I feel the need to pick at the original question and the answers. "Apple compatible computers" is an imprecise term, because Apple has created several lines of computers. The original poster is asking about modern Apple Macintosh compatible computers. If we reach back to the early days of Apple, there were several Apple II compatible computers, like the Franklin ACE; and in the early PowerPC days there were licensed Mac clones, as described above. That aside, 65 is correct; Steve Jobs has always hated the idea of having uncontrolled 3rd party hardware vendors creating hardware that runs his OS, and Apple has therefore adopted every legal stratagem possible to prevent this (at least, while Jobs has been at the helm). Comet Tuttle (talk) 18:53, 25 January 2011 (UTC)[reply]

A few more questions. Why did IBM not do the same thing? Weren't they hurt by all of the other companies making their stuff? I mean, IBM doesn't even make computers anymore. Why was it possible for a third-party company like Microsoft to make an operating system for IBM computers, but it wasn't possible for a third party to make one for Apple computers? Didn't IBM mind Microsoft making an OS for their computers? ScienceApe (talk) 22:29, 25 January 2011 (UTC)[reply]

IBM thought they had. IBM licensed DOS (which was really a CP/M clone) from Microsoft - the IBM PC DOS article explains that IBM actively wanted MS to keep the copyright (for complicated IBM reasons) - but it didn't occur to IBM that a true clone could be made. They relied on the IBM BIOS copyright to keep out clones - but Phoenix reverse-engineered the BIOS, which enabled Compaq (and then others) to produce legal IBM compatibles. And Microsoft was very happy to sell those guys MS-DOS. In retrospect IBM's decisions weren't ideal, but IBM still made big bags of money off PCs for a decade or more, so don't cry for them. -- Finlay McWalterTalk 23:11, 25 January 2011 (UTC)[reply]

Finding weev at bop.gov

Hi!

http://arstechnica.com/apple/news/2011/01/goatse-security-trolls-were-after-max-lols-in-att-ipad-hack.ars says that weev is now in federal custody

But he hasn't appeared yet on the Federal Bureau of Prisons locator at http://www.bop.gov/iloc2/LocateInmate.jsp , and I know that so far he has no bail

I tried several variations of his name, and I can't find him on the BOP site

What variations of his name do you think will bring up his records on the BOP site?

WhisperToMe (talk) 05:48, 25 January 2011 (UTC)[reply]

As far as I know, "in custody" does not mean that he has been consigned to the prison system. That won't happen until he has been tried, found guilty, and sentenced. He is most likely in a holding cell somewhere, possibly a sheriff's office local to where he was arrested. --LarryMac | Talk 19:34, 25 January 2011 (UTC)[reply]
In the United States, when a suspect is arrested for a federal offense, whether they were investigated by the FBI or any other federal agency, the arresting officer is usually a United States Marshal, who may choose to work in conjunction with other Federal and local law enforcement agencies. The suspect is almost always placed in the custody of a local (usually a county-level) law enforcement agency that has a previous agreement to hold Federal inmates for a period of time. You can search the county corrections facilities for records of special federal inmates. In the first hours of processing, a suspect will be held by a United States Marshal officer, and within a few hours, transferred to a county sheriff or other local law enforcement or corrections officer operating as a Federal deputy. Because Federal legal procedures are significantly different than local processes, the inmate may be transferred to any jurisdiction in the United States, as circumstances require. Within a certain period of time that can range from a few hours to several days, a Federal judge will initiate a legal process called an "initial hearing." Bail may or may not be granted. (That means the suspect may be permitted to go home, for a length of time decided by a judge, but will be re-arrested later, when the court is ready to proceed). Finally, the suspect is brought back to Federal custody, and the court process begins; during this period of time, the inmate will remain in custody of a Federal officer and/or local deputies. After the suspect is arraigned and sentenced (if convicted), (which can be many years later), they will be immediately transferred to Federal incarceration, managed by the Federal Bureau of Prisons. The handy "Know Your Rights" card from the ACLU is actually stunningly deficient regarding the processes that will take place if you are arrested by Federal law enforcement agents. But you can read all about it on the Marshal Service website: Defendants in Custody and Prisoner Management. "The Marshals Service assumes custody of individuals arrested by all federal agencies and is responsible for the housing and transportation of prisoners from the time they are brought into federal custody until they are either acquitted or incarcerated." So if you are looking for a recently-arrested Federal suspect, you need to contact the Marshals Service, and they will be required by law to disclose which jurisdiction the suspect is being held. That does not mean that they must tell you where he is - only under what jurisdiction he is in custody, if he is actually being held. Contact an attorney if you need help locating a suspect or inmate, because the legal process is complicated; the suspect must be protected by law enforcement agents, and this means that they will not just tell "anybody who asks" where he is. Nimur (talk) 19:59, 25 January 2011 (UTC)[reply]

What font?

What is this font academic papers tend to use quite a bit? http://arxiv.org/pdf/1101.4241 --128.54.224.231 (talk) 07:05, 25 January 2011 (UTC)[reply]

For reasons of formatting, most academic papers in my experience use a monospace font most computers are likely to have. Monospacing is important because it preserves spacing, especially in situations where precise spacing is important, like legal documents. Courier New seems to be quite popular. 65.29.47.55 (talk) 08:36, 25 January 2011 (UTC)[reply]
Computer Modern. Titoxd(?!? - cool stuff) 08:42, 25 January 2011 (UTC)[reply]
The font used at the OP's link is certainly not Courier New, which is a typewriter-like monospaced font. It is approximately like Caslon, and in MS Word the nearest I can find is Bookman Old Style. I think it is more common in older textbooks than modern articles. Cuddlyable3 (talk) 09:20, 25 January 2011 (UTC)[reply]
Good heavens, don't you folk recognise Knuth's Computer Modern when you see it? It was once almost universally used in academic typesetting because it's the default font in TeX. Marnanel (talk) 12:42, 25 January 2011 (UTC)[reply]
Exactly. Documents that look like this have been prepared in some form of TeX (usually LaTeX). --Mr.98 (talk) 13:10, 25 January 2011 (UTC)[reply]
Actually, Titoxd did. Nil Einne (talk) 22:02, 25 January 2011 (UTC)[reply]

How to download and join together text from webpages?

I would like to download all the Exhibitor links from this page http://www.icetotallygaming.com/exhibitors/list , and then join all the details together into one long text document. What would be the easiest way to do it please, rather than doing it manually? I use WinXP, and Firefox, also IE. Thanks. 92.15.2.19 (talk) 12:14, 25 January 2011 (UTC)[reply]

I don't think there's an easy way to do this. You could try using wget to download all the links to one output file with something like
wget -r -I */list -np -O output.html "http://www.icetotallygaming.com/exhibitors/list/"
however, this would include everything on each page (banners, adverts, etc.) and result in a very large single HTML file (my test was around 20 MB). From there you might be able to use grep (a Windows version is available here) or something like that to extract the text you want based on set parameters and build a single text file with just the information you want, but this goes beyond my experience, so I don't know how to do that or even if it would work. 82.44.55.25 (talk) 19:07, 25 January 2011 (UTC)[reply]

Thanks. Is there any way to save the HTML only, and nothing else? Saving it all to one file is not essential, as I expect there is software available that can join files up. Incidentally, where would the output be saved, please? (So often a mystery with "nerdy" programs.) 92.15.10.209 (talk) 14:29, 26 January 2011 (UTC)[reply]
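One more option: a rough Python sketch of the whole job (fetch the list page, follow links that look like exhibitor pages, strip tags, append everything to one file). The link pattern is an assumption about the site's structure, and the regex tag-stripping is deliberately crude. It also answers the output question for most command-line tools: files land in the current working directory unless you say otherwise.

    import re
    import time
    import urllib.request

    SITE = "http://www.icetotallygaming.com"

    html = urllib.request.urlopen(SITE + "/exhibitors/list").read().decode("utf-8", "replace")
    links = sorted(set(re.findall(r'href="(/exhibitors/[^"]+)"', html)))

    with open("exhibitors.txt", "w", encoding="utf-8") as out:
        for link in links:
            page = urllib.request.urlopen(SITE + link).read().decode("utf-8", "replace")
            text = re.sub(r"<[^>]+>", " ", page)   # crude: drop anything tag-shaped
            out.write(text + "\n" + "=" * 70 + "\n")
            time.sleep(1)                          # be gentle with the server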

LibreOffice or OpenOffice Writer file formats

I'm having trouble answering this question because I'm not sure of the terminology in the articles and websites I've looked at.

Can LibreOffice or OpenOffice read and write the file format used in the current format of MS Word?

ike9898 (talk) 14:34, 25 January 2011 (UTC)[reply]

OpenOffice can read/write both the older and current versions of MS Word documents. There are issues with image placement and indexes, but the main text opens/saves well. -- kainaw 15:22, 25 January 2011 (UTC)[reply]
Any idea if this is the same for LibreOffice? ike9898 (talk) 15:53, 25 January 2011 (UTC)[reply]
Yes. -- kainaw 16:43, 25 January 2011 (UTC)[reply]

Searching huge number of files, don't want to hang server (grep?)

Hi all,

I am searching for a specific string that could be in any file in a large Linux server. I have access to the server but don't own it, so I don't want to do anything that might slow it down, like some badly formed regex that loops through all the files for ever or something dumb like that. I don't need it to be fast, but I would like to get results as soon as they come in, if possible (i.e. if it finds a result immediately, but still has an hour of searching to do, I don't want it to wait an hour before telling me).

What would be the best way to do this, assuming that server has a standard set of search tools?

The only way I know is to do a recursive grep from the root directory:

grep -r "my text" *

Will that slow anything down if it has to search a huge number of files? Will it return results as they come?

Thanks! 63.138.152.196 (talk) 17:15, 25 January 2011 (UTC)[reply]

You can wrap your grep command (or any other program) inside a shell script. Then, tell Linux to run the script with nice - the nice level sets a "priority" for your script, allowing it to slow down if other programs or users want CPU or disk access. For example, nice -n 19 ./search.sh runs the script at the lowest scheduling priority (the script name is just a placeholder). Depending on your Linux/Unix, you may need to set ionice (file and disk access priority) separately from CPU nice - this is a place where the man page on your specific machine will be more helpful than a web search (man ionice).
The purpose of this exercise is that your script will use 100% of the CPU and disk access speeds, as long as nobody else is using them; but if any other process or user has work to do, your process will gracefully slow down - so you won't "hog the machine" even if your job takes hours. Nimur (talk) 17:23, 25 January 2011 (UTC)[reply]
Be careful with grep -r; the version I have follows symlinks and thus can easily get into a loop. Use find / -xdev -print0 | xargs -0 grep -F string. The -xdev causes find(1) to search only one filesystem (the root), so it avoids things like /dev and /proc that are often special filesystems that don't contain real files. (If your server has multiple interesting filesystems (check with mount), you can name the mount point for each on the command: / /home /tmp or so.) The -print0/-0 allows any file names whatsoever to be used. The -F makes grep(1) search for fixed strings rather than regular expressions. This command or yours will produce results as soon as it finds them (at least if the output is to a [pseudo]terminal, and probably otherwise anyway). --Tardis (talk) 19:48, 25 January 2011 (UTC)[reply]

Coding puzzle

I'm trying in Basic to compare A$(z) with B$(X,Y). If A$() and B$() are identical, then I want a result of 1. So if both strings are "qwerty" or if both are "" then the answer should be 1.

An extra requirement is that if B$() is a substring of A$() then it is only matched if the substring starts from the beginning of the other string. If A$()="QWERTYUIOP" and B$()="QWERTY" then the result should be 1, but zero where A$()="ZQWERTYUIOP" for example.

It took me a long time debugging until I found out that INSTR(1,A$(),B$()) gives a result of 1 where A$()="QWERTYUIOP" but B$() is just "". Not what I would call a match. (Note that INSTR() gives a result of 0 when used to compare "" with "").

Can anyone suggest a convenient way to achieve the above please? There is a wide range for Z and X but only 0 to 3 for Y. Thanks 92.15.22.33 (talk) 18:33, 25 January 2011 (UTC)[reply]

Look into using LEFT$() which should be available in your version of BASIC. Comet Tuttle (talk) 18:46, 25 January 2011 (UTC)[reply]
I was just looking that up, myself. LEFT$(A$, N) isn't equivalent to Python's "startswith"; it just returns the first N characters of the string. Some kind of loop with LEFT$ might work, but the first thing it would do is compare the first zero characters of A$ to the first zero characters of B$, and get a match. 81.131.30.212 (talk) 19:01, 25 January 2011 (UTC)[reply]
There's a bit of a logical ambiguity here, because "QWERTYUIOP" really does begin with "". If you want "" to match "" and "QW" to match "QWERTYUIOP", then "" also matches "QWERTYUIOP", unless you specifically rule it out. 81.131.30.212 (talk) 19:01, 25 January 2011 (UTC)[reply]
In most programming languages (including most implementations of BASIC), the empty string is a distinct entity from a null String. (In C we would represent an empty string as a valid pointer to an array whose first element is '\0'; while a null string would be an (invalid) pointer to a null address). Most BASIC implementations use the same concept, even if the implementation of pointers is opaque to the programmer. Nimur (talk) 19:17, 25 January 2011 (UTC)[reply]
I didn't know BASIC could have null pointers. Is it possible to set a string to NULL, then? "S$ = 0" gives "Type mismatch". DIM A(0) produces a single-element array, not a zero-element one. 81.131.30.212 (talk) 19:33, 25 January 2011 (UTC)[reply]
Just to quibble, what you are calling "the null string" is not a string of any kind. In languages with more watertight type systems, in order to talk about something that might be a string, or it might be nothing, you need to give it the type Maybe String or String + Nothing or something along those lines. All strings begin with the empty string. Paul (Stansifer) 20:26, 25 January 2011 (UTC)[reply]
Yes, the concept of a "null string" is pretty weird, and not found in most languages. The dialects of BASIC that have them are really a Java-like languages with slightly different syntax, and don't have a whole lot in common with traditional BASIC. C++ doesn't have null std::strings. C doesn't have strings at all, but does have "null pointers", which are also a weird concept. Special null values in statically typed languages are just a hack to get around the lack of convenient typesafe unions in those languages. -- BenRG (talk) 20:52, 25 January 2011 (UTC)[reply]
Digression: programmers use null for the Maybe semantics, but that's generally not what it's put in the language for. In Java, I believe it was considered necessary so that there was something to implicitly initialize references to, because Java guarantees that it will never examine uninitialized memory. The alternative would be that the programmer would need to provide a default value when they allocated a new array, which would be pretty icky. Truly type-safe languages typically avoid the mess by providing higher-level primitive functions so the programmer doesn't have to muck around with allocation. Paul (Stansifer) 22:20, 25 January 2011 (UTC)[reply]
How about a subroutine like this:
IF A$ = "" AND B$ = "" THEN RESULT = 1 : RETURN
IF A$ = "" OR B$ = "" THEN RESULT = 0 : RETURN
IF LEN(B$) > LEN(A$) THEN RESULT = 0 : RETURN
IF LEFT$(A$, LEN(B$)) = B$ THEN RESULT = 1 : RETURN
RESULT = 0 : RETURN
The 5 lines deal with the following 5 cases:
  1. both strings empty
  2. one string empty
  3. test string bigger than target string
  4. target string begins with test string (edited to remove silliness)
  5. target string doesn't begin with test string. 81.131.46.74 (talk) 19:51, 25 January 2011 (UTC)[reply]
Simpler version:
L = LEN(B$): IF L = 0 THEN L = 1
IF LEFT$(A$, L) = B$ THEN RESULT = 1 ELSE RESULT = 0
This works because LEFT$(A$, 1) is "" when A$ is "", so an empty B$ still matches only an empty A$. 81.131.14.206 (talk) 02:29, 26 January 2011 (UTC)[reply]

Thanks, will try out the simpler version and make use of it. 92.15.28.68 (talk) 11:59, 26 January 2011 (UTC)[reply]

January 26

Intel Xeon Processor 7500 Series

In regard to the Intel Xeon Processor 7500 Series, there are currently models running from 7520 to 7560, and I'm wondering whether any higher model numbers have been announced by Intel for future availability. I tried Yahoo! Search, but wasn't able to get any relevant results. Thanks again. Rocketshiporion 01:56, 26 January 2011 (UTC)[reply]

Qualcomm Brew Jailbreak

Are there any tools out there that can jailbreak Qualcomm's BREW or their new Brew MP platform and allow one to run unsigned Brew applications? And, possibly, does any custom Brew firmware exist? I'm interested in this because I might want to try programming on the upcoming HTC Freestyle. --Melab±1 02:21, 26 January 2011 (UTC)[reply]

BREW still has not been broken, and probably will not be for a very long time, due to lack of interest and the complexity of its security. I know that while Verizon used BREW there was no activity in this regard, and BREW had very few game choices. Meanwhile, the platform beneath BREW at the time, J2ME, had lots of activity, including many big companies like EA making games. --Tatsh (talk) 21:16, 26 January 2011 (UTC)[reply]

Trackbacks for Wikipedia articles

I'm trying to trace mentions of Wikipedia articles in the wider web. Searching the URL of the article in Google turns up some links, but misses most, as they are embedded or shortened. Topsy performs this service for Twitter ([9]), but is there any way of doing it for the Internet at large, or at least, say, the blogosphere? Appreciate any related tips, Skomorokh 03:10, 26 January 2011 (UTC)[reply]

Hyperlinks are unidirectional, so there's no way to generate a complete index of all pages that contain links to a specific article. You can search through massive indexes of web pages that have been spidered and analyzed, but you'd essentially be writing your own search engine in the process. Or, you can use the indexes of major commercial web search engines. Google used to support a "link:" keyword; for example, you could query link:en.wikipedia.org/wiki/Albatross - but I just experimented, and I don't think the results are significantly different from searching without the special keyword. Nimur (talk) 18:01, 26 January 2011 (UTC)[reply]
See http://siteexplorer.search.yahoo.com/. -- Wavelength (talk) 18:55, 26 January 2011 (UTC)[reply]
See http://webmasters.stackexchange.com/questions/2611/yahoo-site-explorer-api-replacement.
Wavelength (talk) 19:30, 26 January 2011 (UTC)[reply]

YouTube comments

If you make a comment in reply to someone and it never shows up, is it actually there? There's been one occasion where I replied to someone and was never able to see what I wrote, yet they replied to my comment. However, the comment that I thought never showed up was visible on my cell phone. But in another instance where the same thing happened, the comment was never visible on either my computer or my phone. How do I know if it's there or not? I don't want to make a double post days after writing the original comment. 24.189.87.160 (talk) 03:53, 26 January 2011 (UTC)[reply]

It sounds like it could be a caching problem, so that you are not seeing the most current version of the page: you could try Ctrl+F5 to force the page to reload (that works in most browsers under Windows and Linux, but you may need a different command on other systems). --Colapeninsula (talk) 15:19, 26 January 2011 (UTC)[reply]

Cross-compiling C++ programs for Mac OS X on Linux host

(I first asked this here)

Is it possible to cross-compile C++ programs on a Linux host machine for a Mac OS X target machine? 85.131.56.200 (talk) 06:15, 26 January 2011 (UTC)[reply]

Possible, but not easy. In principle, the GNU Compiler Collection (which includes a C++ compiler) can cross-compile for almost any combination of systems, but you need to recompile GCC for each different combination of host and target system, so in practice it's not simple. Googling produces a few answers; here's one discussion[10] which recommends IMCROSS[11], although they say that isn't very simple either. --Colapeninsula (talk) 15:26, 26 January 2011 (UTC)[reply]
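Once such a cross-toolchain is built, invoking it looks just like a native compile, except that the compiler name carries a target prefix. A sketch (the prefix below is hypothetical - the exact name depends on how the toolchain was configured):
/* hello.c - a minimal program for checking that the cross-toolchain works */
#include <stdio.h>

int main(void)
{
    printf("Hello from a cross-compiled binary\n");
    return 0;
}
You would compile it with something like i686-apple-darwin9-gcc hello.c -o hello (or the matching ...-g++ for C++), then test the resulting binary on the Mac, since it will not run on the Linux host.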
One project that has succeeded in doing this at least once is OpenTTD; their setup should carry over. [12] A problem you may often run into with cross-compiling is whether or not ALL the libraries you use will work. 90% of the time, everything glibc and other libraries do on Linux works on Windows (MinGW) and Mac OS X/Darwin; sometimes you might see missing headers, missing functions/macros, etc. As for prebuilt library ports that you can plug into a compiler targeting OS X, I have never seen any. --Tatsh (talk) 21:01, 26 January 2011 (UTC)[reply]

Ubuntu and Wubi recovery

I used to run Windows Vista on my laptop with Ubuntu (10.04 LTS) installed via Wubi. Recently, it refused to boot up under Ubuntu, so I booted up under Vista, backed up the entire hard drive onto an external drive, and nuked the drive (the internal one) with Ubuntu 10.04 LTS. The machine is now a single-OS system running Ubuntu 10.04 LTS.

I would, of course, like to recover the content I had in the Ubuntu installation before the crash, but I cannot find a way to access the root.disk file containing that content. How can I do this? --Lucas Brown 16:27, 26 January 2011 (UTC)[reply]

What tool did you use to do the backup? Looie496 (talk) 17:35, 26 January 2011 (UTC)[reply]
While I've never used it, the examples at [13] suggest that the root.disk file is just a regular disk image. You should be able to mount it (as root) with mount -o loop root.disk /mnt (assuming /mnt is the desired temporary mountpoint), possibly adding an explicit -t option if the filesystem type can't be autodetected.—Emil J. 17:52, 26 January 2011 (UTC)[reply]