Wikipedia:Reference desk/Computing

From Wikipedia, the free encyclopedia
Welcome to the computing section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


June 25

Software which SEPARATES references from the main text of a scientific paper?

Does anybody know of such software?

blessings.

there's hardware. or you could be more specific. --188.28.47.94 (talk) 00:15, 25 June 2011 (UTC)[reply]

grep, perl ¦ Reisio (talk) 00:49, 25 June 2011 (UTC)[reply]

The questioner may be thinking about reference management software, such as Reference manager. --NorwegianBlue talk 11:20, 25 June 2011 (UTC)[reply]

Hello, indeed, I mean for the bibliography... if Reference Manager is the software that can separate the refs from the paper, then it is the software for me.

thanks. — Preceding unsigned comment added by 79.179.8.59 (talk) 12:47, 25 June 2011 (UTC)[reply]

I posted the following link on my web page, tested it, had several other people also test it, and it worked fine.

<a target="_blank" href="https://www.securedata-trans14.com/ap/stjohnthebaptistcatholicchurch5/index.php?page=10">
<img border="0" src="images/onlinebutton_2.jpg" width="156" height="111"></a>

Next thing I knew, people were calling in saying they clicked on the image and nothing happened. It was easy enough to fix, but what could have possibly caused the link to disappear (as shown on a cached version of the page)? --Halcatalyst (talk) 01:17, 25 June 2011 (UTC)[reply]

What was it you did to "fix" it? ¦ Reisio (talk) 01:28, 25 June 2011 (UTC)[reply]
Incidentally, img elements should not be used this way, you should use a CSS background image instead; and JPEG is for photographs (the 'P' stands for 'Photographic'). [and target="_blank" is just annoying] ¦ Reisio (talk) 01:33, 25 June 2011 (UTC)[reply]
I use a web editor, so I simply put the link address back in. I understand HTML well enough to inspect the code but not to employ it originally. --Halcatalyst (talk) 03:59, 25 June 2011 (UTC)[reply]

If the link wasn't there, then someone or something removed it. ¦ Reisio (talk) 05:24, 25 June 2011 (UTC)[reply]

That's what I would think. However, it wasn't me, and no one else has access to the site. Can pages displayed on the Web be altered directly? I suppose it would be possible for a cracker, but I wouldn't think they would attack my little site. What about something mechanized? --Halcatalyst (talk) 19:32, 25 June 2011 (UTC)[reply]

PHP regular expression

How could I formulate a PHP regular expression to ensure that there are no control characters in a string? Magog the Ogre (talk) 03:16, 25 June 2011 (UTC)[reply]

Addendum: silly me, I forgot to specify it's unicode. Magog the Ogre (talk) 03:32, 25 June 2011 (UTC)[reply]

Nevermind, I got this one. Magog the Ogre (talk) 07:16, 25 June 2011 (UTC)[reply]
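For anyone landing on this thread later: one common approach (a sketch only, not necessarily what Magog ended up using) is to test for the Unicode "control character" property with a UTF-8-mode regex:

<?php
// Sketch: true if $s contains no Unicode control characters (category Cc).
// Depending on your needs you might also exclude format characters (\p{Cf}),
// or explicitly allow tab and newline.
function has_no_control_chars($s)
{
    return preg_match('/\p{Cc}/u', $s) === 0;
}

var_dump(has_no_control_chars("hello world"));    // bool(true)
var_dump(has_no_control_chars("bad\x07string"));  // bool(false) - contains BEL

Note that preg_match() returns 0 when nothing matches and false on error (for example, on invalid UTF-8), so the strict === 0 comparison treats a malformed string as failing the check.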

Enquiry about purchase of Monitor Inverter

Hi all,

Currently I am experiencing a number of monitor problems on my Toshiba laptop: the backlit display appears very blurry, faded and sometimes discoloured. I wonder whether the primary problem lies with the monitor inverter or with the monitor itself. As I want to try a replacement of the monitor inverter first, rather than the more expensive TFT monitor, I'm about to order one at batterysupport.com.

So this is my doubt: has anyone here experienced a monitor issue like mine before? If yes, how have you dealt with it? I can place the order online straight away from batterysupport.com, but to the best of my knowledge this company is based in China (maybe it's a Chinese company itself), which in turn gives me second thoughts about making a purchase from the site. Hence, I'm writing here seeking more information regarding the issue. Thanks in advance. — Preceding unsigned comment added by Predestination (talkcontribs) 03:57, 25 June 2011 (UTC)[reply]

I've had problems with my laptop back-light going out entirely, so I use an external monitor now. This obviously reduces the portability somewhat, but will definitely fix the problem quickly, and, if you choose, you can get a larger and better monitor than you had. StuRat (talk) 17:11, 25 June 2011 (UTC)[reply]

Safe Mode in Windows XP

1) Does it make sense that a computer would run a LOT faster in safe mode than in regular?

2) I understand that some "functionality" is not available in safe mode. But if the computer is doing what I need done, is there any reason I could not or should not use safe mode all the time? Does it cause later problems for example?

Thanks, Wanderer57 (talk) 04:49, 25 June 2011 (UTC)[reply]

Unfortunately, Safe Mode is not designed for regular use, but for solving hardware or software conflicts instead. You can temporarily use it for urgent issues; however, in the long term you will expose yourself to vulnerabilities. Predestination (talk) 05:19, 25 June 2011 (UTC)[reply]
Some WP:OR... In my experience, safe mode is not always faster. One of the bits of "functionality" that may be disabled is disk caching, and that can have a serious impact on performance, depending on what you are doing. Mitch Ames (talk) 09:00, 25 June 2011 (UTC)[reply]
Safe mode disables a lot, a lot of functionality for compatibility purposes, basically so that there is as little reason for the PC to crash as possible (hence the designation 'safe') so you can get your stuff together to fix it. It would probably be a lot of algos in the buttocks to use it long-term. If there's a specific problem you're encountering which hinders your ability to run Windows in normal (contrary to safe - unsafe?) mode, why not describe it here? --Ouro (blah blah) 13:25, 25 June 2011 (UTC)[reply]
Thank you all. My naive assumption was that "safe mode" was called that because it was less susceptible to viruses and other malware from outside. Am I wrong about this?
I went to safe mode while trying to find malcode (and the search programs did find Trojans).
The computer has become slower over time. I found that in safe mode, the computer runs noticeably faster. Does this speed difference give any clues as to why the computer has become slower (aside from the effect of the Trojans, of course)? Thanks, Wanderer57 (talk) 15:12, 25 June 2011 (UTC)[reply]
Short answer: not really. Something is slowing it down, but with this little information it's hard to say what it is. But it does suggest that safely backing up your data and reinstalling the OS would be something to consider if you happen to have a weekend free some time. --Ouro (blah blah) 18:26, 25 June 2011 (UTC)[reply]
Addendum, because I was thinking over and over about this: there's too much default-loaded software hogging your PC's resources (system memory, CPU time, disk usage), that could be why it's slower when booted normally. It's just a guess though. --Ouro (blah blah) 19:57, 25 June 2011 (UTC)[reply]
Every PC I've ever had has slowed down with time. Reinstalling Windows and the programs you use, then waiting all day for endless updates to be downloaded and installed, will make your PC faster. It is also a good way to be sure you have got rid of any malware. Of course, it is a big step so you should make sure you can find all the install disks and back up your documents, photos, music, emails etc. before starting. Astronaut (talk) 09:30, 26 June 2011 (UTC)[reply]

remote processor networks

Question withdrawn. --DeeperQA (talk) 14:43, 25 June 2011 (UTC)[reply]

If routers have 192.168.1.1 as IP...

...how do I know that I am connecting to my router (in a building with many WLANs)? I know that I can connect to my network and go to my 192.168.1.1, but what if I cannot connect to my network? Wikiweek (talk) 13:20, 25 June 2011 (UTC)[reply]

...because you are connected to a specific SSID. It is the SSID you see when you choose the wireless "connect to a network" option from your PC. The 192.168.x.x addresses are internal to the network of your access point and your computers that are connected to it, whether wired or wireless. Network address translation (NAT) is used by the access point to translate the external IP address (which is given to the access point by your ISP), into a 192.168.x.x address on the internal network. Many access points assign 192.168.1.1 to themselves. You may think there are many 192.168.1.1 addresses, but you can only get to the one access point that you are currently connected to. If you are not connected to a network, then you cannot go to the 192.168.1.1 IP address. Astronaut (talk) 09:08, 26 June 2011 (UTC)[reply]

Economical side of torrent

How do torrent creators earn money? It seems that some have made so many that a private exchange can be ruled out for sure; it looks much more like a full-time job. 88.9.209.112 (talk) 14:47, 25 June 2011 (UTC)[reply]

Nope, essentially they do it for "glory". It's like how some people can publish hundreds of youtube videos. Lots of free time, and a desperate need for attention drives torrent-makers. i kan reed (talk) 13:41, 27 June 2011 (UTC)[reply]

Making a torrent and adding it to a tracker from data you already have connected to a computer takes a couple minutes, and that's if you don't have it automated. ¦ Reisio (talk) 17:28, 28 June 2011 (UTC)[reply]

filesystem

Is there a type of filesystem or something which automatically adds data redundancy and error correction to files? 77.197.196.159 (talk) 15:35, 25 June 2011 (UTC)[reply]

RAID. StuRat (talk) 17:03, 25 June 2011 (UTC)[reply]
There are distributed filesystems that do this, such as Freenet and Google File System. It would probably be pointless on a (single) local drive, since they commonly fail outright before developing any bad blocks. -- BenRG (talk) 21:25, 30 June 2011 (UTC)[reply]

secure against file corruption

I want to secure a file against the worst type of damage and corruption. What free software does this? 77.197.196.159 (talk) 15:35, 25 June 2011 (UTC)[reply]

If this file doesn't change, then all you need to do is copy it to a safe place, say a CD or memory stick, with no special software required. Store the copy off site, in case of fires, etc. StuRat (talk) 17:02, 25 June 2011 (UTC)[reply]
Sound advice. No file on rewritable media connected permanently to a live system is completely, fully safe. For security make a few copies and store them at different locations. --Ouro (blah blah) 19:44, 25 June 2011 (UTC)[reply]
You could also create a draft e-mail in (e.g.) Gmail and attach the file to this draft e-mail. This will create a copy of the file on Gmail's servers.
Or you could try something like SkyDrive or DropBox, but that may require installing extra software.
See also RAID. 93.95.251.162 (talk) 10:27, 28 June 2011 (UTC) Martin.[reply]

parchive

If I have a file which got corrupted, would making a Parchive AFTER it got corrupted be able to restore the corrupted parts? If not, what other methods might be able to salvage a corrupted file? 77.197.196.159 (talk) 15:35, 25 June 2011 (UTC)[reply]

The Parchive would help you recover the data to the state it was in when you created the Parchive file(s). For example, if you later found the original data and deleted the corrupted data, you could then use the Parchive file(s) to try to get back the corrupted copy of the data. Nil Einne (talk) 16:08, 25 June 2011 (UTC)[reply]
What about AFTER the data was corrupted? The article says parchive uses good data around broken / missing data to guess what the broken / missing data should be. Would that work? 77.197.196.159 (talk) 16:23, 25 June 2011 (UTC)[reply]
No. It may be possible to recover some of the data, depending on the file type and the nature of the corruption, but not using parchive. -- BenRG (talk) 08:04, 26 June 2011 (UTC)[reply]
Repeating what I said above in a different way: if you create Parchive file(s) after the data was corrupted, they will be useful if you ever want to recover the corrupted data precisely as it was when you created the Parchive files. They will not recover the original data. For example, if you ever retrieve the original data, you could then use the Parchive files to try to convert the original data back into the corrupted data. Nil Einne (talk) 14:55, 26 June 2011 (UTC)[reply]

Storing 15GB of data in my garden for 50 years

If I wanted to store 15GB of data in a waterproof box buried several feet beneath the surface in my back garden for 50 years, what storage medium would be most likely to survive? Assume that in 50 years I'll have a perfectly working computer able to read any of today's storage media. 77.197.196.159 (talk) 15:35, 25 June 2011 (UTC)[reply]

Paper is the best storage medium for you. — Preceding unsigned comment added by 88.9.209.112 (talk) 16:26, 25 June 2011 (UTC)[reply]
Not really, that would be a huge amount of paper, maybe 3 million pages. Two DVDs (or one double-sided) sounds like the best option to me. Note that you'd need to bury it below the frost line, or the freeze/frost cycle might break through the box in that time period. StuRat (talk) 16:55, 25 June 2011 (UTC)[reply]
And how can you know that DVD readers will be available in 50 years? Indeed, no digital device that we use today will probably be available in 50 years. But just imagine that you print little dots (=bits) on a sheet of paper. In the future, you'll have to scan/take pictures of the sheets and convert that to information. It is probably a lot of paper, but not the 3 million pages that you suggest above. It's probably something like 1 MB/page. 88.9.209.112 (talk) 00:14, 26 June 2011 (UTC)[reply]
Why not bury a DVD-Drive with it? Or a Mac Mini. --24.249.59.89 (talk) 21:44, 28 June 2011 (UTC)[reply]
You can't, but the OP has already said that's not part of the question. It's a silly question and premise (one of the reasons why I didn't try answering it), but according to the question you aren't supposed to be worrying about whether you can actually read the media. Nil Einne (talk) 15:01, 26 June 2011 (UTC)[reply]
Unfortunately our article section CD-R#Expected_lifespan says a CD-R disc is expected to last about 10 years only; see Disc rot. Different longevity numbers are scattered in 3 of our articles; Optical disc recording technologies#Longevity says that Mitsui claims its discs will last 100 years (which is obviously an extrapolation and not empirical); but Compact disc#Recordable CD says the design life is 20 to 100 years but discs can die in 18 months. A major reason that 88.9 mentioned paper is its demonstrated longevity of hundreds of years under the right conditions; plus paper is immune to obsolescence. (My old Apple II floppy disks, even if they survived physically, would be very difficult to read just because of the difficulty of obtaining a system that had the hardware to read a good disk.) Comet Tuttle (talk) 17:24, 25 June 2011 (UTC)[reply]
Most of those forms of disc rot would either automatically be prevented by storing them in a locked box (deterioration from UV exposure and scuffing), or could be prevented with proper measures (like storing them in nitrogen to prevent oxidation). The delamination problem I'm not as sure about, but quality discs probably won't do that, if stored as described.
As for being able to read a DVD 50 years from now, that doesn't seem completely impossible to me. DVD is a major format, so many people will continue to have readers/players years after they stop being made. Compare with LP records or the earlier 78s. Many people have record players that support those, even though the format goes back well over 50 years. I'd have less faith that a format which never really caught on, like 8-Track, Betamax, Digital Audio Tape or HD-DVD, would be readable 50 years later. StuRat (talk) 03:00, 27 June 2011 (UTC)[reply]
This. But really, LOCKSS is a better strategy than burying stuff in your garden. 24.6.39.50 (talk) 03:01, 26 June 2011 (UTC)[reply]

I'm curious about how confident our OP is about having access to the same garden in 50 years time. (Not a Computing question, I know.) HiLo48 (talk) 03:10, 26 June 2011 (UTC)[reply]

Considering the OP is confident they'll have a wonder computer capable of accessing any of today's storage media it seems only one minor risk. Nil Einne (talk) 15:01, 26 June 2011 (UTC)[reply]

Any digital archive must be continuously maintained. The file format and media technology used must be kept up to date; you can't bury it in the garden. The best place to keep it is in your computer, with adequate backups using current technology. Right now your options include DVD, external hard drive, or flash memory. Roger (talk) 10:00, 26 June 2011 (UTC)[reply]

Perhaps a solid state drive? Make sure that you keep a computer that will be able to read SSDs. General Rommel (talk) 01:46, 27 June 2011 (UTC)[reply]
An SSD won't do because flash memory cells slowly leak charge. According to [this white paper] an unused Flash memory device will be unreadable after 10 years. This is not a problem in normal usage because the flash controller does static wear leveling to refresh little-used memory cells, but obviously that doesn't work without a power source. 130.188.8.12 (talk) 14:17, 27 June 2011 (UTC)[reply]
Dodger67, you're (heh,heh) dodging the question entirely. He specifically said to assume he will have a computer from this time period to view the files. The question is clearly simply about the endurance of the storage material and not data-archiving best-practices in general. APL (talk) 07:08, 27 June 2011 (UTC)[reply]
  • Build a digital archive in your garden; hire a chief archivist, an archivist, and a couple of archival assistants. Fund it for 50 years. Data retention will be high. Long term information storage means having experts at information storage, who attend to the maintenance and format of the data. Fifelfoo (talk) 01:50, 27 June 2011 (UTC)[reply]
You could store it on sheets of rust-proof metal as text or numbers, with the characters deforming the metal rather than just ink, which could decay. A more practical solution is that some microfiches are probably processed to survive a long time. According to the article, that's 500 years if you can keep them dry, otherwise 20 years. Ozymandias managed to preserve data for thousands of years. 92.29.127.234 (talk) 08:58, 27 June 2011 (UTC)[reply]
I agree, use microfiches. All you need to read it in 50 years is a magnifying glass. Or even a water drop would work if civilisation has gone pear-shaped and they don't have magnifying glasses anymore. 93.95.251.162 (talk) 14:45, 28 June 2011 (UTC) Martin.[reply]

What is the weird little separator in my Firefox 5 tabs?

There's this weird little animated separator between one tab and another when I use the recently downloaded Firefox 5. I thought at first it just delineated the last tab open so that you could see which one it was when you had many open and the last might be scrolled to the right off the screen, but that's not it. I have 8 tabs open now and it's between the fourth and fifth. It's hard to describe so let me see if I can find a text symbol that looks like it. Well, not exactly, but it's like a vertical line that has a circle at the top--like a pipe "|" with an "o" resting on top, and it's blue and it's between two tabs and stretches a bit below where the tabs end, meaning it overhangs the window a bit. What is that? I tried googling "new features" and "firefox 5" but didn't find anything, and I also looked at Wikipedia's article without luck.--108.54.17.250 (talk) 16:41, 25 June 2011 (UTC)[reply]

Sounds like a bug. I downloaded Firefox 5.0 and created over 8 tabs, but couldn't reproduce it, so it's not a universal bug. StuRat (talk) 16:48, 25 June 2011 (UTC)[reply]
Hi, not an answer as such, but, I have found FF5 to be a little bit buggy - On mine, the tabs across the top of the browser currently obscure the "full screen/not full screen" option and the "x" option to close the browser. I think FF5 may have some issues that need to be resolved. I could of course be entirely wrong :s Darigan (talk) 16:56, 25 June 2011 (UTC)[reply]
Hmm. It looks like a feature. Glitches don't usually look so purposeful, neatly placed, but maybe it is. I took a screenshot. Let me try to upload it.--108.54.17.250 (talk) 17:47, 25 June 2011 (UTC)[reply]
Interesting, it may very well be a feature - I'll check back in a bit to see if you have managed to upload a picture. Darigan (talk) 17:58, 25 June 2011 (UTC)[reply]
Here it is: --108.54.17.250 (talk) 18:07, 25 June 2011 (UTC)[reply]
I think it's a feature of one of the extensions you have added. Try running Firefox in safe mode (firefox -safe-mode), which should disable all the extensions and should confirm that it's not a standard Firefox Mac feature. Looking at it I'd suspect it's from some tab-group extension, and it wants you to drag or toggle that blue thing somehow. -- Finlay McWalterTalk 21:00, 25 June 2011 (UTC)[reply]
I'll try the safe mode thing. I don't think I have any extensions though. When I first saw that thing after I downloaded Firefox 5 I tried to access it, drag it, toggle it—it's just static.--108.54.17.250 (talk) 02:02, 26 June 2011 (UTC)[reply]
Note that Firefox 5 attempts to maintain earlier extensions from previous versions of Firefox, so perhaps an old extension has some incompatibility with V5. StuRat (talk) 22:53, 28 June 2011 (UTC)[reply]

A device doesn't "support" .txt files?

This is a new one to me. Windows Vista, and I connected a digital camera via its USB cable. When I try dragging a .txt file to any folder within the digital camera's windows, I get a "Cannot Copy File" error dialog which explains: "The file 'thisfile.txt' is not supported on this device. Although this device supports document files, it does not support .txt files." What? Googling some of this error text hasn't helped. Any ideas why this error is being displayed, and what it really means? Comet Tuttle (talk) 17:16, 25 June 2011 (UTC)[reply]

Digital camera’s windows? You’re trying to use a camera as a storage device? ¦ Reisio (talk) 18:46, 25 June 2011 (UTC)[reply]
Take out the storage medium (memory card?) from the camera and access it via a memory card reader connected to your computer. You'll be able to store the text file then (but of course the camera itself will not be able to read it). --Ouro (blah blah) 19:51, 25 June 2011 (UTC)[reply]
Yeah, I need to add something (a bit in the dark because not a Windoze user any more, but there's enough left between my ears to figure it out). Well, if you plugged the camera via a USB cable directly to the computer, Win recognised the equipment as a digital camera, and I suppose that is the reason it doesn't allow you to copy a text file onto it - because, simply put, a digital camera is not supposed to support text files, but store images. Try different file formats. It doesn't matter in this setup that there's a run-of-the-mill memory card inside the device - the connection had been established to a digital camera. Now, what I described earlier, putting the card into a memory card reader and plugging it into the PC is a completely different story - because now the PC will recognise a card reader with a memory card - ergo, a storage device, the interface (driver? logic?) of which is to accept and support any and all file types and formats as you please. I hope this helps further. And sorry for posting two answers, I just had a bit more time to think about this issue. Cheers, Ouro (blah blah) 21:33, 25 June 2011 (UTC)[reply]
Most newer cameras are not mounted as block devices, but by way of Picture Transfer Protocol (PTP). The camera itself may limit what you can transfer to/from it by means of that protocol (and actually I don't know whether text files are even supported by the protocol itself, but my guess would be that they are; why would you build in a limitation to the kinds of files at the level of the protocol?). For example I can mount my camera using PTP under Linux, and list directories with ls and everything, but the Digital Negative (DNG) files don't show up, even though they are definitely there. --Trovatore (talk) 21:41, 25 June 2011 (UTC)[reply]
Thanks for the answers; this is the first I'd really heard of PTP, and it's the first I'd seen a camera (or similar device) mounted as anything other than a regular FAT or FAT32 or NTFS volume. Plugging the memory card itself into a PC did work, of course, and as predicted the .txt file does not show up when plugging the whole camera into the PC again. Comet Tuttle (talk) 20:44, 27 June 2011 (UTC)[reply]

Windows stable work

I haven't had problems with malware since the 1990s, but a less fortunate and much less computer-literate friend of mine just caught a piece of scareware/malware called Windows stable work. It's a fake (and fairly convincing looking) "anti"virus that launches on startup, prevents running any programs (giving a great choice between "NO, I don't want to run" and "YES, I want to block this program!") and offers a paid version of the software to remove these threats. Being a bit more experienced with Universal Calculators, I told him how to reach safe mode: now it appears that CLI safe mode doesn't allow the malware to run, allowing us to execute programs without it interfering. However, I'm clueless on how to actually remove the thing, as my friend's installed antivirus/anti-malware programs (I think they were Avira and MalwareBytes) failed to notice the programa non grata. Any help is appreciated. Zakhalesh (talk) 20:30, 25 June 2011 (UTC)[reply]

http://www.google.com/search?q=windows%20stable%20work ¦ Reisio (talk) 21:13, 25 June 2011 (UTC)[reply]
I did do a Google search, yes, but most of the stuff that came up seemed outdated (eg. the registry keys said to be used by the malware weren't present on my friend's machine) and so on. I was wondering if anyone has removed the particular malware recently. Zakhalesh (talk) 13:57, 26 June 2011 (UTC)[reply]
I'd suggest you Boot in Safe Mode with networking, download the Malwarebytes definitions and then scan for any viruses. — Preceding unsigned comment added by General Rommel (talkcontribs) 01:39, 27 June 2011 (UTC)[reply]
  • My process
  1. Boot into safe mode. This prevents the virus from starting with your PC.
  2. Start -> Run -> regedit
    1. Navigate to HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run
      1. Remove ANYTHING you don't recognize, recording the paths so you can find the files
    2. Repeat for HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce
    3. Repeat for HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run and RunOnce
  3. Delete any of the files you recorded, after making backups in zip files (so they can't be executed)
  4. Check the Startup folder of the Start menu and remove anything there
  5. Restart the computer and verify with Task Manager that the program didn't start

There are lots of other places that malware can use to start up, but the vast majority uses those two locations. This is essentially the same thing malware scanners check for, except they only remove KNOWN threats, which constantly evolve precisely to avoid that. If you want a more comprehensive list of what runs when your computer starts, Microsoft distributes a wonderful tool called "Sysinternals Autoruns". Google that. I simply don't trust anti-malware tools anymore and do my removals manually. i kan reed (talk) 13:59, 27 June 2011 (UTC)[reply]

Wireless

Resolved

Any insights help-deskers can offer on this would be gratefully received since, as usual whenever I encounter IT, it is driving me up the wall.

Until now my home has got by with one computer connected via a USB cable to a modem which in turn is connected by an ADSL cable to a signal splitter which plugs into a broadband-enabled phone socket in the wall.

However, this summer we have students staying with us so clearly a wireless network will be needed for their benefit.

I had an old router and I plugged it in to find that it was broken, so I trundled to the shops to buy a new one.

Now, the new router, it turns out, after I opened the box, doesn't contain an internal modem like my old one did, but needs connection, via ethernet cable, to an existing modem which in turn connects to the wall.

But, as mentioned above, my existing modem has neither an ethernet cable nor an ethernet port to plug one into: just a USB cable.

So where do I go with this? Is the answer:

  1. Take the new router back to the shop and replace it with a different one with a modem built in?
  2. Buy a new modem of the variety required by the new router?

I suppose it is worth adding that:

  1. I'd prefer a solution which is quite cheap over one which is brilliant; but
  2. Too much loss of connection speed would probably really annoy. AndyJones (talk) 20:37, 25 June 2011 (UTC)[reply]
Yeah, take it back and get an integrated one. That's the simplest thing to have, and shouldn't cost much (most UK ISPs just give you one, or sell you one at a knockdown price, when you sign up). -- Finlay McWalterTalk 20:58, 25 June 2011 (UTC)[reply]
I'm surprised you were easily able to get a router without a modem. Most "wireless routers" sold in the UK have a built-in modem. This page from the website of a popular UK retailer lists 23 of the devices, only one of which appears to have no built-in modem. Be careful though: the hard to find technical specification for that one product says it does have a modem, which calls into doubt the accuracy of the information on the website (in my experience with that retailer's website, the only thing you can trust is the price and image of the product). It is probably better to verify technical specs with the manufacturer's website. Astronaut (talk) 08:28, 26 June 2011 (UTC)[reply]
The page you linked is a list of "BT Wireless Routers", and "BT" appears to mean "ADSL" (yes, I know what it really stands for). If standalone modems and routers are hard to find now, that's a pretty recent phenomenon, isn't it? In my admittedly limited experience with UK and US home broadband, they were always separate. -- BenRG (talk) 08:50, 26 June 2011 (UTC)[reply]
The OP self-identifies as British, and I have assumed they are still resident in the UK. It is not really a recent phenomenon. In my experience in the UK, when you go to buy a wireless router from a retail store like PC World, or get one for free when you sign up with an ISP, they will nearly always come with a built-in modem. In the box, you usually get the combined modem-router device, the power brick, a phone filter, an RJ-11 modem cable and maybe an ethernet cable, along with a small manual and a driver disk (some of this is shown in this image). On the rear panel, the device itself usually has the power port, four ethernet ports and one RJ-11 port for the modem cable. If you have a cable router instead, you get slightly different package contents but it is essentially the same device. Back in the early days of broadband provision in the UK, before ISPs considered people might want to connect wirelessly or connect multiple internet-enabled devices, ISPs supplied a free USB modem to replace your dial-up modem; this is probably the kind of thing the OP currently has. From what I have read (here on the ref desks and elsewhere) the situation seems to be slightly different in the US, where it seems to be more common to have separate devices. Astronaut (talk) 12:41, 26 June 2011 (UTC)[reply]

Great advice from all. Followed Finlay's advice and all now working as hoped. Many thanks as ever. AndyJones (talk) 19:15, 26 June 2011 (UTC)[reply]

Zero-byte files in WinXP

Using Duplicate Cleaner, I find I have a large number of zero-byte files on my computer. Is it safe to delete them? How much space do they take up? Thanks. 92.29.120.186 (talk) 21:27, 25 June 2011 (UTC)[reply]

They're probably just junk and it's probably safe to delete them (but it'd be safer moving them to a special folder for a while, so you can restore them if they do mean something). Some programs might make zero-size files as a little reminder they can store in the filesystem - sometimes as a lock file and sometimes as a little note to subsequent runs of the program that something has happened (e.g. some programs might create an "initialisation performed okay" zero-size file after their first run, so they don't show the "please input initial setup parameters" screen every time you run it). There are preferred methods for doing some of this (the ATOM table, the registry, the LockFile API) but some programs may make zero-size files anyway. If I understand the NTFS filesystem layout properly (which I definitely don't) then a zero-size file should take up a small-file-record in the MFT; so "not much space", but I don't know exactly how little. -- Finlay McWalterTalk 22:00, 25 June 2011 (UTC)[reply]
MFT entries are almost always 1K, so a file small enough to fit there (up to maybe 600–700 bytes) takes roughly 1K (and a little more for the copy of the filename in the parent directory index). But unused MFT entries are not reported as free space, so you won't see any obvious space gain if you delete the files. -- BenRG (talk) 08:31, 26 June 2011 (UTC)[reply]

Tooltips In System Tray

When I mouse over the icons in my status bar, tooltips come up telling me what each icon represents. In most cases, these tooltips cover up some of the other icons, forcing me to move the mouse (and be greeted by further tooltips covering up other icons). How do I switch off this feature? --KägeTorä - (影虎) (TALK) 21:42, 25 June 2011 (UTC)[reply]

I don't think you can remove the feature, but you can remove the icon altogether. General Rommel (talk) 01:37, 27 June 2011 (UTC)[reply]

Which top-level domains are limited as far as who can own them?

Resolved

I know that certain top-level domains like .gov and .mil are not allowed to be privately owned. What about .edu? Could I purchase a .edu site for personal use? (in the same manner that I can purchase a .org site, even if it's not for an organization) ... What other TLDs are limited where they cannot be purchased for personal use? ~ Mesoderm (talk) 23:21, 25 June 2011 (UTC)[reply]

Our article .edu answers the question, I think: no. Looie496 (talk) 00:30, 26 June 2011 (UTC)[reply]
Indeed, "no", but only insofar as you'd have to first purchase something passable as an educational institution. ¦ Reisio (talk) 03:58, 26 June 2011 (UTC)[reply]
More than that, it has to be an accredited college/university (etc) in the United States (actually, accredited by an agency approved to accredit by the US DOE). Paul (Stansifer) 21:11, 26 June 2011 (UTC)[reply]
wikt:passable ¦ Reisio (talk) 14:44, 27 June 2011 (UTC)[reply]
See sponsored top-level domain. Gobonobo T C 07:04, 26 June 2011 (UTC)[reply]

Thanks folks! ~ Mesoderm (talk) 17:58, 26 June 2011 (UTC)[reply]

An existing .edu domain certainly doesn't have to be in the US: uofk.edu is the University of Khartoum (Sudan). Also, I think I've encountered at least one small and obscure US "creationist" institution that's not accredited by any agency taken at all seriously yet has an .edu domain. -- Hoary (talk) 00:03, 1 July 2011 (UTC)[reply]


June 26

Saving passwords on Android apps

Is there a way to save the passwords for Android apps I use, so I don't have to type them in every time? Like Wells Fargo app. I have the HTC EVO. CTJF83 00:29, 26 June 2011 (UTC)[reply]

Do you feel good about a person who steals your EVO having access to your bank account? Looie496 (talk) 00:32, 26 June 2011 (UTC)[reply]
....Well Wells Fargo covers it, but is it possible? It's a pain typing in long complicated passwords on a phone. CTJF83 00:35, 26 June 2011 (UTC)[reply]
I have an Android, and I've worked in the computer support departments of a bank before. I am 95% sure this isn't possible; banks would make sure not to include that capability in the app, and I don't know of any apps which can manipulate other apps. You might find one that can automatically paste a given piece of text, but that's probably more effort than it's worth.
If you're really worried about it, you might try using the web interface and saving the password. I do not recommend this unless you have decent security on your phone already, and frankly, not even then. But if you do implement it, make sure that if you ever lose your phone, you immediately check your online banking to confirm that your password hasn't changed, that your address hasn't been changed, and that you haven't sent any secure messages to your bank. Your bank may or may not have a note upon sign-in telling you of your most recent sign-in time. Magog the Ogre (talk) 01:02, 26 June 2011 (UTC)[reply]
Oh, ok, thanks. And actually now that you mention it, even on Firefox and Chrome it won't let me save the login for any of the 3 banks I use. Like you said, they must have something to prevent it. It makes sense... ok, I'll just be non-lazy and type it in. Thanks to both of you! CTJF83 01:07, 26 June 2011 (UTC)[reply]
It can be done, but it's probably not worth the trouble. ¦ Reisio (talk) 03:55, 26 June 2011 (UTC)[reply]
RoboForm isn't free, but it has an Android application in the market that can do this. It protects all the passwords with a master password, but it will also (as part of how it works) synchronise the passwords to an online RoboForm account as well as any other computers you have connected to the account. Whether you want it to do all this or not I don't know, but the product itself does what you ask and may be worth a look (I use it myself).  ZX81  talk 18:37, 26 June 2011 (UTC)[reply]

adding automatic subtitles for VLC videos?

Is this possible?

Is there another player that does this, if not VLC? — Preceding unsigned comment added by 79.179.8.59 (talk) 02:45, 26 June 2011 (UTC)[reply]

What do you mean automatic? VLC will play embedded subtitles, and subtitles of certain formats that have the same filename as the video being played, and are in the same directory. It doesn't find subtitles automatically, but it wouldn't be that hard to make a script or two to, for example, check the entire tree of a directory for subtitles files (even in archives), and use them, or even to find subtitles from sites such as opensubtitles.org. ¦ Reisio (talk) 03:57, 26 June 2011 (UTC)[reply]

Hello. What I need is a plug-in, or maybe another player, that could automatically show subtitles for my videos, whatever that might be.

Is there such a thing today?

thanks. 79.179.8.59 (talk) 13:24, 26 June 2011 (UTC)[reply]

I doubt there is (any public/official support for that), but the data (the subtitles) and the technology (it'd only take a simple shell script or batch file) is all there for someone willing to put in the time. ¦ Reisio (talk) 14:41, 27 June 2011 (UTC)[reply]
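To give an idea of what such a script could look like, here is a rough sketch in PHP (illustrative only: it assumes the vlc binary is on your PATH, that your VLC build accepts the --sub-file option, and that the listed subtitle extensions are the ones you care about):

<?php
// Sketch: launch VLC on a video, attaching the first subtitle file found
// anywhere under the video's folder whose base name matches the video's.
$video = $argv[1];
$base  = pathinfo($video, PATHINFO_FILENAME);
$files = new RecursiveIteratorIterator(new RecursiveDirectoryIterator(dirname($video)));
$sub   = null;
foreach ($files as $file) {
    if ($file->isFile()
        && pathinfo($file->getFilename(), PATHINFO_FILENAME) === $base
        && in_array(strtolower($file->getExtension()), array('srt', 'sub', 'ssa', 'ass'))) {
        $sub = $file->getPathname();
        break;
    }
}
$cmd = 'vlc ' . escapeshellarg($video);
if ($sub !== null) {
    $cmd .= ' --sub-file=' . escapeshellarg($sub);   // tell VLC which subtitle file to load
}
passthru($cmd);

Fetching subtitles from a site such as opensubtitles.org, or looking inside archives, would take more work, but the basic "find a matching file and hand it to the player" part really is this small.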

Tracking down a group of hackers

I'm talking about a notorious group of hackers who have a significant presence on Twitter. I'm curious as to why they haven't been tracked down already by law enforcement. The VPN they use (by their own account) is HideMyAss, which keeps logs for 30 days, and for at least 7 on US territory. It's a US-based company too. Judging by their IRC conversations, they only use the VPN, and not any shells or such. Why haven't they been tracked down then? — Preceding unsigned comment added by Temptre.22 (talkcontribs) 10:55, 26 June 2011 (UTC)[reply]

You are referring to LulzSec. The identities of the main players were posted online by a group of kids who admitted to having no real hacker skills. Very quickly, one was arrested, a few quickly left the group, and the rest decided (yesterday) to shut down LulzSec. They are probably busy trying to erase every computer they've used while they wait for the slooooooow arm of the law to knock on their door. (Damn laws - require real evidence to arrest someone.) -- kainaw 13:02, 26 June 2011 (UTC)[reply]
I know who I'm referring to; I avoided naming them. But that's just the thing. Their identity was posted online, but not due to their VPN. Those who quickly left the group left because the group attacked the FBI. And I don't pretend to know why they disbanded, I'm just curious as to why they would choose a VPN that logs, and why the FBI hasn't used the Patriot act to seize logs. Temptre.22 (talk) 13:06, 26 June 2011 (UTC)[reply]
The Patriot Act will do nothing more than speed a warrant request through the court system. That is only useful when trying to get a warrant for logs kept on a computer inside the United States. Where are the HideMyAss servers located? If they are in Virginia, that may explain why the FBI stormed a data center and took all the servers - even those that didn't have anything to do with HideMyAss or LulzSec. -- kainaw 13:19, 26 June 2011 (UTC)[reply]
[1] suggests they used more than HideMyAss. BTW, Telecommunications data retention#Commercial data retention and National Security Letter suggest the Patriot Act expanded the scope for NSLs, which don't have to be issued by a court and which can be used to demand some non-content info, from the sound of it probably including stuff like who was using IP X at time A (see also [2]). Although whether they could be used in this case, and whether the FBI would risk it even if they could, I don't know. Nil Einne (talk) 18:06, 26 June 2011 (UTC)[reply]
Thanks for the link, Nil, it was an interesting read. They mention on that forum that the group might have used SSH to carry out the hack. I see the group also discussed a few VPNs, but they all recommended HMA, and at least one of them was using it. This is all hypothetical, but since they were so good with networks and such, why would they use a VPN that logs? What am I missing here? (and don't say they were dumb or something... they might have been many things, but dumb is not one of them) Also, to be clear, NSLs are much faster than typical search warrants. They're issued almost immediately. Temptre.22 (talk) 18:38, 26 June 2011 (UTC)[reply]
Most obvious reason... a friend worked there. -- kainaw 23:01, 26 June 2011 (UTC)[reply]

I don't know about LulzSec, but any serious technophile will have no trouble avoiding being located by anyone. ¦ Reisio (talk) 14:39, 27 June 2011 (UTC)[reply]

Cat5e crosstalk

Would the reduced crosstalk in Cat5e cable compared to Cat5, over about 2 metres, be enough to see increased bandwidth/speed between a 100 Mbps router and a PC? Would the type of telephone cable between a router and phone socket also make a difference? Bahr456 (talk) 14:55, 26 June 2011 (UTC)[reply]

Do you have any reason to believe the bandwidth of your connection is actually affected by the cable? For a 2 metre cable I doubt it. Bear in mind many of the cheaper network adapters often don't actually achieve the maximum speed, particularly if the computer is slow, which it may be if it still has only a 100 Mbps adapter. Intel adapters are usually considered the best I believe, although 3Coms are also considered not bad, and nowadays even the Marvell and RealTek GbE adapters aren't too bad (RealTek fast ethernet adapters were often considered rather crap).
Also I'm confused about your setup. Are you referring to a LAN? If so, what does the router or phone socket have to do with it? If you are referring to an internet connection using the phone line (I guess some sort of DSL), I quite doubt you have anything close to a 100 Mbit connection, so there is very likely no point worrying about the connection between the router and PC.
Improving the household cabling for the phone line may help a bit; you'd first want to install a dedicated DSL splitter at the point of entry of the phone line, if one isn't already installed, rather than using microfilters on each phone jack. Using Cat5 between the splitter and the xDSL router may also help. But don't expect miracles; there's a fair chance the difference won't be much if anything [3]. Also, are you sure your bandwidth is not being limited by your ISP, whether at the DSLAM or the IP level? If it is and you're getting the advertised speed, then there is likely no way you can improve bandwidth other than to upgrade your plan.
Nil Einne (talk) 15:17, 26 June 2011 (UTC)[reply]
Thanks for your answer. I'm referring to a DSL line. My router is profiled for an 18 Mbps download speed, but speed test results only give a result of about 15 Mbps. 18 Mbps was what the ISP set the line speed to, saying it's the best the line can handle while guaranteeing a reliable connection. So I assumed the drop in speed was either the internal wiring or the router? Bahr456 (talk) 17:40, 26 June 2011 (UTC)[reply]
Is the router actually syncing at 18 Mbps? If it is, then there's probably not much you can do about the speed. It's quite unlikely to be the connection between the router and PC. And if the router is syncing at 18 Mbps and your ISP is only willing to let you sync at that speed, then there's probably not much you can do. Bear in mind Asynchronous Transfer Mode has about a 10% overhead, so if the DSL router is syncing at 18 Mbps you should only expect about 16 Mbps via any sort of TCP speed test (I think you should actually allow about 13% to take into account TCP/IP overhead as well, and 18 Mbps less 13% is roughly 15.7 Mbps, close to what you are seeing). Nil Einne (talk) 17:50, 26 June 2011 (UTC)[reply]

Most error correcting file compression?

Is there any file compression type that is better at error correction than others? Given the same recording medium, what archive formats are better at reconstructing missing/damaged data? I read about RAR's recovery volume (.rev) option, but I can't find any Mac RAR client that can read/write these files. --68.102.167.95 (talk) 18:13, 26 June 2011 (UTC)[reply]

Have you tried Parchive? 118.96.163.71 (talk) 13:18, 27 June 2011 (UTC)[reply]

Right to left encoding

Hello, I keep having problems with copying right-to-left encoded text, e.g. from this page http://www.mapnaec.com/persian/ I got شرکت مهندسی و ساخت برق و کنترل مپنا which has curious properties when I try to select it with a mouse from end to end (I assume others will get this too).

Can someone explain what is happening to cause this, whether it is standard behaviour, and whether it is unavoidable on a computer set up for left-to-right text? Or whatever you want to tell me. Thanks. Imgaril (talk) 19:23, 26 June 2011 (UTC)[reply]

It is complex behaviour but I don't think entirely unintentional. It's just that you are trying to mix two systems (LTR and RTL). It assumes you want to copy a passage in order. Imagine the following example in Latin text (since the character set has nothing to do with it): "lufrednow si taht, yM", Mr Backwards exclaimed. Now, selecting from right to left, your browser assumes you want to move backwards through the prose, in which case the order is exclaimed - Backwards - Mr - lufrednow- si - taht - yM. The browser "Helps" you select it in that order, which is weird but makes perfect sense for many usages. - Jarry1250 [Weasel? Discuss.] 20:41, 26 June 2011 (UTC)[reply]
Yes thanks - I'm starting to understand it now. I haven't checked, but it seems obvious that the byte order and display order are swapped in the Persian text, causing the odd-looking behaviour. I suppose there must be some sort of control character in there, or perhaps it's inferred simply because the characters come from the "right to left" Unicode code space. It surprised me that even Notepad does it... Imgaril (talk) 18:33, 27 June 2011 (UTC)[reply]


June 27

Thermal efficiency of FPGAs vs. conventional CPUs

How do field-programmable gate arrays compare to conventional CPUs in the amount of heat produced per operation, and in the amount produced in the process of loading a program versus reconfiguring an FPGA? NeonMerlin 07:43, 27 June 2011 (UTC)[reply]

Excel: Date Calculation

I've been sat here for a while trying to figure out the best solution, or even an easy solution, but I can't seem to find one for a calculation involving dates.

I have an Excel spreadsheet which has a column with =today() so it displays today's date. I also have columns with the start and end dates of a project, as well as other columns for the number of days between them, working days, etc. Basically the issue is I would like to work out the percentage of time elapsed for each section. So using the date 27/06/2011 - if the start date was 26/06/11 and the end date 27/06/08, it would say it was 100% complete. Is this even possible?

Many Thanks 195.49.180.85 (talk) 08:10, 27 June 2011 (UTC)[reply]

(I presume you mean 27/06/11 for the end date.) The basic formula would be =(TODAY()-StartDate)/(EndDate - StartDate) (and then format the cell as a percentage), but you'd have to be careful if it was possible for Start date and end date to be the same - maybe =IF(StartDate=EndDate, 1, <formula as before>). If you used NOW() instead of TODAY() you'd get a more precise figure - using TODAY() will give very coarse results if the start and end are a small number of days apart. AndrewWTaylor (talk) 08:19, 27 June 2011 (UTC)[reply]
Yes, you are right, 27/06/11. Have no idea why I used 08. I will give that a try. Thank you :) 195.49.180.85 (talk) 08:25, 27 June 2011 (UTC)[reply]
Of course, you need to decide what you want to be displayed if EndDate is earlier than Today(). If in this case you want to see 100% (rather than a figure that is greater than 100%) then you need to code another exception e.g.:
=IF ( OR ( TODAY() >= EndDate, StartDate = EndDate), 1, <formula as before>)
Gandalf61 (talk) 09:09, 27 June 2011 (UTC)[reply]
Better to use EndDate-StartDate+1 for the denominator, and not say it is finished till the day after the end. Anyway, such dates just give how much should be completed rather than how much is completed. I'd advise getting a project planning tool if you start doing much of this; they can deal with dependencies and suchlike things nicely. Dmcq (talk) 11:22, 27 June 2011 (UTC)[reply]
Second that -- depending on what you "expect" to see, date conversion in project planning software is a fairly significant piece of the code.
The convention is normally that the beginning of a task starts at the beginning of a day, and the end of a task at the end of the day. In this manner, you get what you expect for a one-day task, i.e., it starts and ends on the same day. The task which immediately follows that task, though, can't start until the next day, even though it might logically be only one minute later.
Trying to work around weekends and holidays in Excel will give you an even bigger headache. It really is a case where specialized software is worth it. DaHorsesMouth (talk) 00:17, 28 June 2011 (UTC)[reply]
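Pulling the suggestions above together (TODAY(), the +1 denominator, and capping the result), one possible combined formula - assuming StartDate and EndDate are named cells or cell references, and the cell is formatted as a percentage - would be:
=MIN(1, MAX(0, (TODAY()-StartDate)/(EndDate-StartDate+1)))
The MAX(0, ...) keeps not-yet-started projects at 0%, the MIN(1, ...) caps long-finished ones at 100%, and the +1 in the denominator means the task doesn't read as finished until the day after EndDate, per the convention described above. This is only a sketch; adjust it if you want the start day itself to count as partly elapsed.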

Desktop speaker volume detection?

How does my computer "know" my desktop speaker's volume (dial) setting? If I issued the "ECHO Ctrl+G" command from the Command Prompt and the speaker is turned up, the beep will be played from the speaker. However, if the speaker is turned down (or off), the beep will be played from the internal (PC) speaker instead. How does the computer do this? The only thing that connects the speaker and the computer is a TRS audio cable connected to the speaker's "Input" jack and the computer's "Front Output" audio jack. Many thanks. 118.96.163.71 (talk) 13:14, 27 June 2011 (UTC)[reply]

Completion (or not) of an electrical circuit, I’d imagine. (if [is completed] { use speakers } else { do something else }) ¦ Reisio (talk) 14:46, 27 June 2011 (UTC)[reply]

Remote file system manipulation without FTP

Sorry for the confusing subject; I couldn't quite think of a pithier way to describe this.

I'm working on a project where I need to be able to put files on, and remove files from, a remote server.

The problem: the remote server only supports Secure WebDAV (WebDAV with HTTPS/SSH) as its mode of access. No FTP, no SFTP.

So it's easy for me to manually put files onto said server. But I'd like to be able to do it with PHP. PHP doesn't natively support WebDAV. There is only one PHP WebDAV class on the web that I've found, and it doesn't support secure connections. You can compile PHP to have WebDAV support, but the server I'm hosting the PHP on does not have it compiled into their version of PHP and it doesn't seem likely that I can convince them to do so.

So for the moment I'm trying to brainstorm other options. The remote server can run PHP. So I've been thinking about using some sort of script on the server itself which would receive the file data, e.g. through some sort of secure connection. I don't really know what this would be called. It's not the sort of thing I have experience with, but presumably you could have one that would act as kind of a mini-FTP server. Is this a possibility? Does something like this have a name? Would it work?

Any other ideas? --Mr.98 (talk) 15:37, 27 June 2011 (UTC)[reply]

One other (odd) idea: the "incoming" files to the remote server are all coming from one source, which does have FTP access. I could imagine a script on the remote server that would receive (encrypted?) instructions from the source telling it to connect via FTP (using PHP's built-in FTP functions) and download files to various locations. Kind of a hassle, though.
Another idea is to try and drop into a shell and use rsync or something like that. (Though now that I look at it, it doesn't look like rsync natively supports secure WebDAV either. Damn!) --Mr.98 (talk) 15:48, 27 June 2011 (UTC)[reply]
If you're on Linux (and the like) you can use davfs2 to mount the remote filesystem locally, over WebDAV. The manual page shows examples of it being mounted with an https URL. With the filesystem mounted locally, you can use whatever you like to manipulate the files, as if they were really stored locally. I've never personally used this (nor WebDAV at all) - I'd be interested to know how you get on. -- Finlay McWalterTalk 16:02, 27 June 2011 (UTC)[reply]
The problem is that I'm not doing this locally. Ideally the "local" aspects of this would all be hosted on another (different) remote server. I doubt that the Bluehost server which will be hosting the scripts in question will be able to mount drives. This whole thing is such a mess because the main (academic) webservers have been apparently configured by monkeys, and are apparently not meant to be useful to anyone, hence having to do ten thousand work-arounds for what ought to be a simple CMS... --Mr.98 (talk) 16:09, 27 June 2011 (UTC)[reply]
Ah yes, I remember you suffering the last time with this setup. Are you utterly attached to PHP? The PyDAV Python library supports https connections, and the example enclosed with it suggests that to put a single file you'd do nothing fancier than:
#!/usr/bin/python
from WebDAV import client
res = client.Resource('https://crappyserver.annoying.edu', username='mr98', password='oBsCuRa')
data = open('myfile.html').read() # read a local file and...
res.put(file=data)                # push it to the WebDAV server
-- Finlay McWalterTalk 16:33, 27 June 2011 (UTC)[reply]
Yeah, I'm pretty attached to PHP — both because it's what I know the best, and because this whole thing is going to be paired up with WordPress one way or another, alas... Blargh. I am not quite paid enough to deal with this idiotic server. --Mr.98 (talk) 16:50, 27 June 2011 (UTC)[reply]
You can use a command-line tool easily in PHP. For example, you can use cadaver (if you are on Linux) with popen. I would use $pf=popen("cadaver https://your.server.info", "w"); so you can then pump commands to it with fputs($pf, "YourPassword"); or fputs($pf, "put YourNewFile"); Then, wrap that up in a function like function my_cool_secure_webdav($filename). -- kainaw 18:28, 27 June 2011 (UTC)[reply]
I checked cadaver and it is a bit finicky, so I'd use proc_open instead with read/write to interact with it properly. -- kainaw 18:45, 27 June 2011 (UTC)[reply]
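A fleshed-out sketch of that proc_open approach could look like the following (untested; the URL and credentials are the placeholder values used earlier in the thread, and cadaver may prefer to read credentials from a terminal or a ~/.netrc file rather than from a pipe, which is presumably part of the finickiness mentioned above):
<?php
// Sketch: drive the cadaver WebDAV client from PHP via proc_open
function webdav_put_via_cadaver($localfile, $remotename) {
    $spec = array(
        0 => array('pipe', 'r'),   // cadaver's stdin  - commands are written here
        1 => array('pipe', 'w'),   // cadaver's stdout - prompts and results
        2 => array('pipe', 'w'),   // cadaver's stderr
    );
    $proc = proc_open('cadaver https://crappyserver.annoying.edu/', $spec, $pipes);
    if (!is_resource($proc)) {
        return false;
    }
    fwrite($pipes[0], "mr98\n");      // answer the username prompt
    fwrite($pipes[0], "oBsCuRa\n");   // answer the password prompt
    fwrite($pipes[0], "put $localfile $remotename\n");
    fwrite($pipes[0], "quit\n");
    fclose($pipes[0]);
    $output = stream_get_contents($pipes[1]);   // inspect this for error messages
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($proc);
    return $output;
}
?>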

Robots.txt

Somebody at the NTSB added a robots.txt file to the website. This means that the archived content of http://www.ntsb.gov is now unavailable.

Has anyone found a way to bypass robots.txt so that one can look at the content on http://web.archive.org/*/http://www.ntsb.gov ? What arguments could I use if I was corresponding with NTSB officials and asking them to remove robots.txt from the site?

The US government documents are in the public domain, and stuff on the public website is not classified, so I do not know why the NTSB added robots.txt. WhisperToMe (talk) 18:10, 27 June 2011 (UTC)[reply]

Adding robots.txt reduces load on the server from search engine spiders. Just because information is free does not mean it must be available in the specific manner that you want to receive it. They could place it on file in a file cabinet in the basement of an old missile silo in a corn field in Kansas and still claim it is free - you are free to go and find it. So, any argument you make is going to be seen as, "I want you to change how you do things because I don't want to change how I do things, and I'm offering nothing in return." -- kainaw 18:30, 27 June 2011 (UTC)[reply]
Responding to that argument, I would say "you already did things X way, so why are you changing it now?" (the documents had been on the NTSB website for years) - It would be far easier for the NTSB to leave the old content up than to remove it and then face inquiries asking for the NTSB to restore the material.
And my point of view is "The agency works for the American people, so it must continue to offer the same level of access to this information unconditionally"
WhisperToMe (talk) 18:42, 27 June 2011 (UTC)[reply]
(edit conflict) They probably won't give you a (decent) answer as to why they changed it. I'm not aiming this at you personally, but I wouldn't, it's none of anyone's business how I run my webservers. But what we do know is they have intentionally tried to block respectable crawlers, which could be either because they didn't like the resources they used up trawling the site, they didn't like the idea of someone else caching the site or easily copying everything, or they just simply want people to enter the site and search it using their own tools (or any other reason).  ZX81  talk 18:52, 27 June 2011 (UTC)[reply]
If this were a private webserver, I could totally understand "but I wouldn't, it's none of anyone's business how I run my webservers" - however, in this case I have a different attitude because the NTSB is an agency of the US government, and I am a US citizen and I believe the agency's job is to serve the American people by continuing to provide the information on old accidents. I could potentially ask my federal representatives to look into the matter. WhisperToMe (talk) 19:58, 27 June 2011 (UTC)[reply]
Sorry, I meant no offence, but from what I now understand (based purely on the text above/below), they've already removed that information from their site and basically you were just using archive.org to view it because it wasn't on the real site anymore? I have no idea about US laws or what they're required to do with public information (I live in the UK), but I'm guessing that, like Kainaw said right at the top, the information is probably still available, just not in the way you want to retrieve it. But it certainly wouldn't hurt to ask them; I was just trying to say I don't think you'll get a very helpful reply back.  ZX81  talk 20:45, 27 June 2011 (UTC)[reply]
It's okay - I understand that the argument above was just a devil's advocate response :) - Anyway, the information was taken down from the previous locations and the NTSB has not stated if there were any new locations for the data - the site search I used did not find the public docket info I was looking for. I have sent e-mails to the NTSB, and I am hoping to receive responses.
It's possible that the NTSB could charge a fee if it asks me to order CDs of the information, whereas when it was on the NTSB website it was free.
WhisperToMe (talk) 21:46, 27 June 2011 (UTC)[reply]
I don't think there's a way to bypass it. According to the FAQ page, "When a URL has been excluded at direct owner request from being archived, that exclusion is retroactive and permanent." That seems to imply that they destroy all previous archived snapshots of a site if robots.txt is added. This page also states that adding robots.txt to a site "will remove all documents from [the] domain from the Wayback Machine." Whether they actually delete the previous archives or just block them from public view I don't know; it would be a huge loss if they really did just delete them all. AvrillirvA (talk) 18:49, 27 June 2011 (UTC)[reply]
From my understanding they just block them from view. There was a case where a website once had robots.txt (Sosoliso Airlines website) - See this revision - but when that went away, lo and behold, all of the documents became visible again, and now Wikipedia links to websites on that website's archive. WhisperToMe (talk) 19:52, 27 June 2011 (UTC)[reply]
(After e/c)
To answer the basic question: No. There is no way to fool Archive.org into thinking a site has different permissions than it really has.
Of course robots.txt has no technical power. It's based entirely on the honor system, but Archive.org honors it. And with good reason. If they became known as a malicious spider it would severely hamper their ability to do their job. APL (talk) 18:53, 27 June 2011 (UTC)[reply]
In that case, I may have to wait maybe 10-15 years (or however much time until robots.txt is removed?) - Or see if the NTSB will re-upload the content, or see if my federal representatives could ask the agency to restore the materials in the public docket related to accidents in the late 1990s and early 2000s. WhisperToMe (talk) 20:01, 27 June 2011 (UTC)[reply]
This CRS document may be of use; the Obama administration has declared that they want the US government to be more transparent than ever before. You should be able to talk someone at the NTSB into agreeing with you that the new robots.txt addition has made it less transparent, even though, if I understand the above correctly, this has to do with organizations outside of the control of the NTSB; and therefore it's at odds with the government's goals as expressed in this document and the directives it cites. Comet Tuttle (talk) 20:40, 27 June 2011 (UTC)[reply]
Thanks for finding that, Comet Tuttle! That's really helpful.
If there were outside agencies that asked for robots.txt, I'm not sure who they could be. The file could be reconfigured so that only material from the outside agencies would be affected.
WhisperToMe (talk) 21:49, 27 June 2011 (UTC)[reply]
What I meant was: The impact upon you (of adding the robots.txt file) was entirely at archive.org, if I understand correctly, so from the point of view of the NTSB webmaster, you're complaining about some external website he has no control over. I think a useful angle, when discussing this with anyone at the NTSB, would be to discuss how archive.org helps disseminate this information, which is totally within the Obama administration's transparency mandate discussed in that linked document; so if they can help out archive.org by modifying one file, they oughta do it. Comet Tuttle (talk) 22:14, 27 June 2011 (UTC)[reply]
In this case the webmaster does have control. Web.archive.org does not display archived pages on a particular domain if a robots.txt file is on the current domain. The NTSB just recently included a robots.txt file with its new website, so all of the old archives are blocked. If the NTSB modified or removes the robots.txt file, web.archive.org will display the archives again. WhisperToMe (talk) 22:35, 27 June 2011 (UTC)[reply]
Yes, I understand all of that. But the NTSB webmaster doesn't control that behavior over at archive.org. It's not his fault that they behave the way they do. Comet Tuttle (talk) 00:23, 28 June 2011 (UTC)[reply]
If information is not available on the NTSB website, you can pleasantly write to the NTSB web administrator and ask them to please provide a link to the content you want. If you are certain that the NTSB is withholding documents from you, you can file a Freedom of Information Act request at http://www.foia.gov/ - and your government representatives will mail, email, fax, or otherwise transmit any documents that they have. FOIA is administered by a separate Federal agency, with the intent that FOIA representatives will not collaborate with any "coverup" by any particular agency. There are certain legal requirements that documents, if they exist, must be delivered in a certain amount of time; but of course, specific documents that don't exist cannot be delivered! And you must trust that the administrator of your FOIA request is diligently looking for the information you have requested. If, even after a FOIA request, you still feel that the Federal Government is withholding information, you can pursue civil and legal actions, under the advice of an attorney, to sue the Federal Government for access to the information you are requesting; and a judge will decide what the best course of action is. But let's be clear: just because something used to be on the public website does not mean that the Federal Government is required to host it on the internet in perpetuity. Furthermore, just because the Federal Government is legally obligated to provide access to documents does not mean that they must provide access via any particular internet website. Nimur (talk) 22:30, 27 June 2011 (UTC)[reply]
Thank you for letting me know about the FOIA, Nimur. In this case, all of the documents have existed. I'm going to see what the NTSB says when it e-mails me back. Honestly, I would be okay if I got all of the contents from those public dockets, so the contents can be re-published on wikisource and/or the Commons. WhisperToMe (talk) 22:39, 27 June 2011 (UTC)[reply]
Just a correction to what is otherwise correct in Nimur's post: FOIA is not administered by a separate federal agency. It is administered by FOIA divisions within the agencies in question. They are very much "in house," which becomes very clear if you file a lot of them. (In my experience, paradoxically, the more "closed" the agency, the better they are at handling FOIA requests. Give me FBI and CIA processing my requests any day — they know what they're doing. NARA, not so much.) There is FOIA oversight for guidelines, procedures, etc., but they are very much administered by the individual agencies. --Mr.98 (talk) 00:17, 28 June 2011 (UTC)[reply]
My error. "Each federal agency processes its own records in response to FOIA requests." Thanks for the correction. Nimur (talk) 03:32, 28 June 2011 (UTC)[reply]
http://www.amtonline.com/article/article.jsp?siteSection=1&id=8331 says that beginning in 2009 all public dockets should be released to the NTSB website in accordance with the FOIA plan... - I'm checking to see if the older dockets will still be online, though... WhisperToMe (talk) 16:50, 28 June 2011 (UTC)[reply]

Guys, http://web.archive.org/web/20070321223455/http://www.ntsb.gov/events/KAL801/ is working now! Robots.txt seems to have been altered to let us see old archives. YES! WhisperToMe (talk) 17:00, 28 June 2011 (UTC)[reply]
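For future reference, the "reconfigure the file" idea mentioned above can be as simple as the following robots.txt sketch - assuming the Wayback Machine's crawler still identifies itself as ia_archiver; this is only an illustration of the syntax, not the NTSB's actual file:
# Let the Internet Archive's crawler in everywhere...
User-agent: ia_archiver
Disallow:

# ...while keeping other spiders out of one area only
User-agent: *
Disallow: /some-restricted-area/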

Swedish IPRED Law - keeping logs mandatory

The law / directive implemented by Sweden named IPRED (or somesuch) prompted several ISPs to start destroying their logs, to prevent the authorities from getting their hands on them. This was a while back. Has this law been changed, or have any new laws been passed, that force ISPs and other service providers to keep logs of all activity? I know of at least 2 (Swedish) VPNs (one of which is the Pirate Bay's webhost) that claim that they don't log anything, so I'm not sure what to make of this. Temptre.22 (talk) 19:11, 27 June 2011 (UTC)[reply]

To the best of my knowledge, and from my reading of , the law has not been changed. It should be noted, however, that
(...) Rights holders do, however, have the possibility of requesting that the district court, under § 26 of the Act on Court Matters (lagen (1996:242) om domstolsärenden), prohibit internet operators from deleting the information while the inquiry into a possible information order is being completed (...)
In other words: a court of law may, at its discretion, prohibit ISPs from deleting logs after a plaintiff has brought a request for information before it, until the case has concluded, as deletion could constitute spoliation of evidence. See also WP:IANAL. Regards, decltype (talk) 07:16, 28 June 2011 (UTC)[reply]

firefox

how to backup favorites in firefox 5 ? — Preceding unsigned comment added by Tsp12345 (talkcontribs) 19:46, 27 June 2011 (UTC)[reply]

Ctrl+Shift+B should bring up the bookmarks library, then click "import and backup" AvrillirvA (talk) 20:54, 27 June 2011 (UTC)[reply]
Backing up your profile will get all your prefs and bookmarks and other goodies. To use with a new/different installation, you can just copy the contents into another profile dir (over the existing), or use Firefox's -ProfileManager option. ¦ Reisio (talk) 17:51, 28 June 2011 (UTC)[reply]

Wikipedia content automatically posted on Facebook

Apparently someone is using a bot to automatically create Facebook pages for Wikipedia articles, e.g. this page.

Initially I thought someone had added it manually but I heard that it happens more often. Is it someone who wants to promote Wikipedia who does it or some search engine spammer?

I know it must be some bot because I improved said article a bit a while back and the lead picture changed on Facebook soon after I changed it on WP. SpeakFree (talk) 21:06, 27 June 2011 (UTC)[reply]

I don't use facebook but it looks like they're hosting a mirror of Wikipedia. Wikipedia:Mirrors_and_forks/Def#Facebook has some information. AvrillirvA (talk) 21:15, 27 June 2011 (UTC)[reply]
Well they are welcome to do so as they cite the license and link to the contributors. I guess Zuck & co. do it to generate more traffic to their site. SpeakFree (talk) 21:23, 27 June 2011 (UTC)[reply]
I've been aware of this for some time, maybe a year - it appears to be fairly systematic.
Additionally I've noticed that two versions of articles are often created, one truncated article eg http://www.facebook.com/pages/MaK-G1206/110205232364409?sk=info and one full article eg http://www.facebook.com/pages/MaK-G1206/110205232364409?sk=wiki
It's not clear if this is an official Facebook effort, or if someone is using a bot - or why?? 83.100.186.201 (talk) 21:26, 27 June 2011 (UTC)[reply]
If you look at a Facebook page that has a Wikipedia article, in my experience you'll see that there's not much else there. It's probably done so that these pages actually have some content. Wabbott9 (talk) 15:22, 28 June 2011 (UTC)[reply]

The only thing which is a bit iffy in a legal way is their use of images from Wikimedia Commons without citing the creators per the CC licence. But I guess they don't care, there must be millions of copyright infringing images on Facebook. SpeakFree (talk) 21:33, 27 June 2011 (UTC)[reply]

Edit: If you click on the image on the page it takes you to the Wikimedia Commons page. Don't know if that's enough though for citing the creators. SpeakFree (talk) 18:29, 28 June 2011 (UTC)[reply]
It should be, since we do the same thing a lot of the time. The only risk with linking to an external site for the contrib history is that you remain responsible for following the license, so if the page disappears or changes to something unexpected, you may find yourself violating the licence. Nil Einne (talk) 15:35, 29 June 2011 (UTC)[reply]

Facebook community pages may incorporate content from Wikipedia— such use complies with Wikipedia policies on reuse of content. We at Wikipedia have no control over how the content is included nor can we help to remove it. Facebook does have a topic on Community pages and profile connections on their Help Center. ---— Gadget850 (Ed) talk 12:19, 29 June 2011 (UTC)[reply]

Although, unsurprisingly, they did speak to the foundation about it and offered to work with them in some fashion, an offer that was accepted. [4] [5] [6] [7] [8] Nil Einne (talk) 15:35, 29 June 2011 (UTC)[reply]

Transferring DVDs to computer

My dad recently got our old home videos transferred from the original cassettes onto DVD. Now I want to put those files onto Windows Movie Maker to edit out all the boring junk like dance recitals and keep all the cute and funny stuff. But Movie Maker doesn't recognize that there's anything on the disc, and Windows Media Player doesn't recognize there's anything to rip. When I play the DVD on PowerDVD it works fine, but when I try to play on Media Player there's no sound... What can I do to be able to import these videos into Movie Maker? If it makes a difference, when I open up the DVD drive in Windows Explorer, it shows the files on the DVD as having VOB, IFO, and BUP file extensions. I've never heard of those before. (When replying, please bear in mind that I have a good working knowledge of computers, but no clue about technical stuff. So when in doubt, dumb it down! :) Thanks.) Cherry Red Toenails (talk) 21:31, 27 June 2011 (UTC)[reply]

I don't think Windows Movie Maker understands VOB files, so you will need to convert them into a format which it does understand. Windows Movie Maker#Importing footage lists the formats it will understand. You can use free programs like Handbrake, MEncoder, AutoGK or Avidemux to convert them into a suitable format; probably .avi would be best. Here's a guide for converting with Avidemux. In addition, Avidemux offers limited editing capability, so depending on how complex the task is, that might be good enough. AvrillirvA (talk) 21:49, 27 June 2011 (UTC)[reply]
The VOB, BUP, and IFO files are the normal format for commercial movie DVDs (not Blu-ray). Note that there may be region codes with this format which only allow them to be (legally) played in certain geographic areas. There is a lot of software out there to rip those, but the one I use is for Linux, so I'll let others make suggestions for Windows. StuRat (talk) 22:03, 27 June 2011 (UTC)[reply]
Of the software suggested by AvrillirvA, the easiest to use is probably Handbrake. It's super easy to turn DVD files into AVI files. --Mr.98 (talk) 23:16, 27 June 2011 (UTC)[reply]
I tried Handbrake, but could only get it to make m4v files (which Movie Maker still doesn't recognize). How do I get it to convert to avi? Cherry Red Toenails (talk) 05:34, 28 June 2011 (UTC)[reply]
I just checked and .avi support was removed in version 0.9.4, sorry. Version 0.9.3 is available here AvrillirvA (talk) 09:33, 28 June 2011 (UTC)[reply]
Thanks AvrillirvA! I got it as an avi file now. My new problem is that Movie Maker recognizes it as an audio track, not video. And even when I put it in the audio line, there's still no audio. Just a silent black screen... If anybody knows how to fix that, I'd sure appreciate it :) Cherry Red Toenails (talk) 19:07, 28 June 2011 (UTC)[reply]
I take it back--I do have audio. But still no video. Cherry Red Toenails (talk) 19:08, 28 June 2011 (UTC)[reply]

It has been ages since I last used Movie Maker (back on my old Windows ME edition!) but I always got the impression that Microsoft made it as difficult as possible to import anything other than Windows Media Video, so that customers were forced into using that proprietary video format. 213.102.128.195 (talk) 22:17, 28 June 2011 (UTC)[reply]

Fantastic :P (Also, why does that not surprise me...?) Any suggestions how to convert to that? Cherry Red Toenails (talk) 05:27, 29 June 2011 (UTC)[reply]

June 28

Computing clouds

It appears that all of the commercially available cloud computers like Amazon's and Microsoft's, etc. are devoted mostly to files and data movement rather than to processing. Is there such a thing as a processor cloud that offers enormous numbers of processors as opposed to file space and data manipulation? I'm referring to processors capable of performing distributed functions and/or subroutines which, for example, might count duplicates in a generated list of numbers and then return a sum of squares, such that the amount of stored data is too small to be an issue whereas the generated numbers might grow exponentially and take a long, long time to process. --DeeperQA (talk) 00:24, 28 June 2011 (UTC)[reply]

Probably not, as storage is cheap. Also, for most problems that need parallel computing, a lot of storage space is needed. The example you give doesn't need a lot of space, but it doesn't need a lot of time, either: counting duplicates and summing squares should be doable in around O(n log n) time (depending on implementation). Algorithm sketch: walk the list, putting each member into a search tree (a hash table would usually work fine, too), noticing collisions. You'd probably need to use bignums, but the overhead in this case should be logarithmic, which is cheap. It should be possible to process a list with a few billion elements in not much more time than it takes to read it off the disk in the first place. I suspect that most algorithms that parallelize well need to have large data sets, so you can break them up in the first place. (The DPLL algorithm, for example, takes exponential time, and doesn't use much space, but I don't think it's very fruitful to try to parallelize it.) Paul (Stansifer) 01:03, 28 June 2011 (UTC)[reply]
See distributed computing; for lots of existing projects, see Category:Distributed computing projects. Two famous examples are SETI@Home and Folding@Home. --Mr.98 (talk) 02:39, 28 June 2011 (UTC)[reply]
It sounds like you are saying that a cloud acts just like a single computer, only with a much larger capacity that can handle enormous values and do so very fast, which is the point of splitting an algorithm into pieces: to relieve a single computer of the need to have such great capacity and of the need of disassembling the algorithm in the first place and reassembling the results from each piece, making the use of more than one computer unnecessary.
In fact, the algorithm I'm working with is designed to only begin splitting tasks when the values exceed single-computer memory and speed limitations, which necessitates processing in pieces distributed among the servers.
If the cloud did begin to slow down, is there a way to assign pieces of the task to another cloud, in the same manner as a Beowulf Cluster client sending pieces to the servers? --DeeperQA (talk) 09:39, 28 June 2011 (UTC)[reply]
A collection of computers is not just one big computer. A cloud environment is not a way to get a larger address space or word size or (usually) more memory. Sometimes (in fact, most of the time), getting "the cloud" involved is not the fastest solution, since a single computer will have come up with the answer by the time that you've finished farming out the initial input data to the cloud that you're using...
...provided that you are using a fast algorithm! If the input is large, the difference between an algorithm with a good big-O and an algorithm with a bad big-O is the difference between getting the answer immediately, and dying of old age before you get the answer. Example: If you come up with an O(2^n) (i.e., exponential) algorithm to solve a problem, there is no supercomputer in existence that can get you the answer for an input of size 300 before the heat death of the universe. But if you change the algorithm to one that takes O(n^2) time (and sometimes this is a small change!), your netbook will spit out an answer for an input of size 300 before you can blink. Change the algorithm to take O(n log n) time, and your netbook will be able to process an input of size 6,000 in even less time. Tweaking algorithms for big-O is far more powerful than just throwing more computer resources at the problem.
If you don't understand big-O notation, just ask. It's absolutely essential knowledge for any computer scientist, and you can't write fast programs without thinking about it. Paul (Stansifer) 11:46, 28 June 2011 (UTC)[reply]
Yes, I ran into big-O when I was writing the Check Sort routine but have not gotten that deeply back into logic or mathematical programming since.
I posted the server-side code here but withdrew the question. The algorithm is in fact based on the Check Sort in that it counts duplicates the same way, by using the values in the list as an array index whose content value is incremented each time an index value is read from the list. If it is the first time for a particular value then the value in the array at the indexed location becomes one. If it is the 11th time then the stored value becomes 11. Basically: n(x) = n(x) + 1, where n is the indexed array and x is the value for which duplicates are being counted.
Doing this in assembler or binary is what I would expect to be the next step to improve speed, rather than using a cloud, while still staying with a Beowulf when single-computer limits would require digging up your bones and reburying them with a note that the job had finally completed. About big-O, I'm all ears - go for it. --DeeperQA (talk) 12:48, 28 June 2011 (UTC)[reply]
I can't seem to find a good tutorial on the Web, so I'll try to give a thumbnail sketch of how to think about big-O. Big-O is a formal approach to the amount of time a program takes to execute (it can be used for other things; it was discovered by mathematicians first, but it's more important to computer scientists and programmers). It abstracts over the little things, like how fast a computer is, or how long an individual atomic operation takes. So, making your program twice as fast has no effect on big-O at all; a 2x speedup is too small to measure! Rather, big-O measures how quickly things get out of hand on large inputs. Where an "operation" is defined to be something that always takes the same amount of time (like adding two numbers, or indexing into an array), you need to pay attention to the way the input size affects the number of operations. For a slightly more practical, concrete introduction, see Wikiversity's page.
In this case (and it's a bad example for learning about big-O, because most of the time, algorithms just depend on input size), your algorithm is slow because it depends on something huge: the maximum number you allow in your input array. In particular, you need to zero-out a huge temporary array to check for duplicates. This is slow, and requires a vast amount of memory. Also, finding a large chunk of contiguous memory may be a problem for the allocator. Instead, you want to store the counts in some kind of associative data structure, like a search tree or a hash table. This way, it doesn't matter what the keys are at all, performance only depends on the size of the input, which will usually be much smaller.
Don't bother with assembly at all (conventional wisdom holds that even an expert rarely writes better assembly than a compiler), and you shouldn't bother trying to muck around with distributed computing ("the cloud", Beowulf clusters, etc.) until you have a good intuition for how to make things run fast on a single processor on a single machine. Paul (Stansifer) 04:37, 29 June 2011 (UTC)[reply]

Big-O efficiency works on the concept of trying to find a way to send more data by parsing a single cycle of light. While big-O is doing this research it is already known that by converting the data to parallel all of it can be sent in one cycle using enough parallel channels. It would be a question of whether the glass was half empty or half full except that sending or processing a virtually unlimited amount of data can still be accomplished in a single cycle and need not wait for big-O to find a way to do so in half a cycle.

A binary search requires sorted data, for example. The Check Sort accomplishes this by using the data values as an address, which it marks (the addressed location that is represented by the data contains a mark) and then printing only marked addresses in sequence. Doing this with hardware is so fast the hardware version is named the “instant sort”. --DeeperQA (talk) 09:48, 29 June 2011 (UTC)[reply]

I'm not sure what you're saying here. Are you trying to claim that big-O notation is somehow irrelevant? It's not an implementation tool; it's the fundamental way to measure the complexity of an algorithm. Algorithmic complexity can't be obsoleted; you have to understand it if you want to write programs that operate on any large quantity of data efficiently. Nothing about having more than one processor operating on data at a time changes this.
In any event, "instant sort" isn't fast. For normal inputs, a ten-line Python program using a hash table should be much faster. Paul (Stansifer) 20:54, 29 June 2011 (UTC)[reply]
... here's a Python program that prints the sum of squares of numbers that appear more than once in the input file. It follows the algorithm that I sketched in small text above:
import sys

occ = dict()   # maps each value to how many times it has been seen so far
accum = 0      # running sum of squares of values that appear more than once
for l in sys.stdin.readlines():
    n = int(l)
    occ[n] = occ.get(n,0) + 1
    if occ[n] == 2: accum += n*n   # add each duplicated value's square exactly once

print accum
It takes half a second to process a 50,000 line input file on my netbook; you can see that there's no reason to bother parallelizing it. There's no upper limit to the size of the numbers it can handle, either. There isn't any magic going on here, all you need to do is use the right data structure for the problem. Paul (Stansifer) 03:08, 30 June 2011 (UTC)[reply]
Absolutely not. What I am saying is that if you have a single perfectly tuned 24-inch-wide lawn mower with perfectly balanced, razor-sharp ceramic blades running at 10,000 RPM, it cannot compete with 50 lawn mowers which sputter and run around 1,800 RPM with rusted, bent and unbalanced 18-inch metal blades only sharp enough to cut grass.
The only way a perfectly tuned and honed mower as mentioned above can improve speed and capacity is perhaps to use a wider blade or to duplicate itself, but big-O stops at the notion of a wider blade or duplication.
BTW, have you ever built an instant sort circuit and tested it? --DeeperQA (talk) 10:01, 30 June 2011 (UTC)[reply]
I can't fully understand what you're trying to say. You seem to think that parallelism makes everything faster. Sure, some problems are like lawnmowing; those are called the embarrassingly parallel problems, and you can just cut up the input data and hand it off to all the nodes. For everything else, the conventional wisdom in computer science is that mucking around with parallelism is difficult, and should only be done when it's really needed. (Which is not to say that it's even possible in all cases. I don't see a good way to parallelize this problem to make it fast, unless the keys are kept extremely small.)
All I can do is ask: what's wrong with the program that I wrote? It's fast to implement, it runs fast (even on my dinky machine), it's nice and general (Python uses bignums natively, so the count will never overflow, which I notice is not true of your code). Even if the problem were embarrassingly parallel, it wouldn't be worth the effort of parallelizing to speed it up. Simplicity is always the programmer's goal. Paul (Stansifer) 11:51, 30 June 2011 (UTC)[reply]
You are talking about the benefits of using a particular programming language to overcome number size limitations. The same benefit of using a particular natural language may be greater when discussing a particular topic. But eliminating number size limits does not eliminate the amount of time for an index to increment from zero to the highest value in the list while comparing values and counting duplicates at each incrementation.
The Check Sort and its hardware implementation, called the Instant Sort, is the result of an effort to simplify the Shell-Metzner Sort. Write them in any language and, in that language (if properly written), they will be faster than any other sort, with the exception perhaps of how much data is being sorted (taken care of by using many lawn mowers).
The binary search routine requires sorted data but can also be performed with distributed software and performed directly with hardware that can be duplicated so the hardware version also can run in parallel.
What I am saying is that there is nothing wrong with simplification, but once you achieve ultimate simplification it is time to move on, and that sometimes means going parallel.
Think of it this way… water may seep through a crack in the Earth until it reaches a layer of clay. From there it will simply spread out until it reaches the boundaries of the layer of clay. When you reach the layer of clay, start spreading out. --DeeperQA (talk) 12:19, 30 June 2011 (UTC)[reply]
(You can do bignums in any language. And for this task, they're likely to be necessary. They make it slower, but they ensure correctness.)
I don't think that I can explain why one algorithm is faster than another without using big-O notation. But you don't seem to be interested in the possibility that a non-parallel algorithm could be faster than a parallel one in this case, so I guess it doesn't really matter. Paul (Stansifer) 17:04, 30 June 2011 (UTC)[reply]
No, not true. I was planning on a client-side multiset duplicate counter in C++ or assembler and, depending on speed and capacity improvement, using it server-side on my Beowulf; but it is true that I'm looking for an online service that will provide multiple processors in the form of internally addressable servers so I can do this as-is to the max online. I am open as to how to do bignums in VB v6 SP6 under Windows XP. --DeeperQA (talk) 18:10, 30 June 2011 (UTC)[reply]

Ubuntu keyboard restart

Is there a way to restart my keyboard functionality, without restarting Ubuntu? — Preceding unsigned comment added by 88.9.106.0 (talk) 12:57, 28 June 2011 (UTC)[reply]

Not quite sure what you mean. Has your OS stopped responding to your keyboard inputs? --Aspro (talk) 13:15, 28 June 2011 (UTC):[reply]
Yes, everything is working except the keyboard. 88.9.106.0 (talk)
Which version of Ubuntu are you using? --Aspro (talk) 13:32, 28 June 2011 (UTC)[reply]
8.04, and the problem happens every now and then.. :( — Preceding unsigned comment added by 88.9.106.0 (talk) 13:43, 28 June 2011 (UTC)[reply]
Try ctrl+alt+f3 then type some stuff, then ctrl+alt+f7. That might fix it. -- Finlay McWalterTalk 13:33, 28 June 2011 (UTC)[reply]
How can I type that, if the keyboard is not working? — Preceding unsigned comment added by 88.9.106.0 (talk) 13:35, 28 June 2011 (UTC)[reply]
Just do it. -- Finlay McWalterTalk 13:51, 28 June 2011 (UTC)[reply]
Finlay McWalter's suggestion is based on the (pretty good) hunch that something is broken with your X server and/or your desktop-manager. A common problem might be a bug in your X server's HID drivers. By pressing those key sequences, you'll swap out and back in (without restarting X), and hopefully "kick" the server back into normal operation. This doesn't really "fix," as much as "work around," the bug. Let us know if that actually worked; if not, we can try to diagnose other possible causes. Hardware or driver failure in your USB system is the next most probable culprit after your X server, in my opinion. Nimur (talk) 15:53, 28 June 2011 (UTC)[reply]
Time to update to the current LTS version. Although you can't see the keyboard doing anything, it doesn't mean the keyboard can't open a terminal - hence the suggestion of Ctrl + Alt + F3. Also, it helps to check that you still have sufficient free disc space. Empty trash etc. I think a bad bit of RAM might also cause intermittent faults like this.--Aspro (talk) 14:06, 28 June 2011 (UTC)[reply]
OK, I'll try next time the keyboard gets blocked. And update to the new version of Ubuntu (although I suppose it's a hardware problem). 88.9.106.0 (talk) 14:20, 28 June 2011 (UTC)[reply]
Also, note that if you're completely stuck, you can hold down Alt and Sys Rq (the same key as Print Screen), and then hit S, U, and then B (while still holding Alt-Sys Rq down). This will tell the computer to reboot in a slightly gentler fashion than simply hitting the power button does. (see Magic SysRq key, which also includes a slightly longer and even better sequence of commands) Paul (Stansifer) 16:18, 28 June 2011 (UTC)[reply]
Worth trying Alt+SysRq+k first. ¦ Reisio (talk) 17:48, 28 June 2011 (UTC)[reply]
In case it's relevant, I have a similar problem where the keyboard seems to get disconnected sometimes. I find by switching to a different desktop workspace and back again, the keyboard will magically reconnect. Astronaut (talk) 11:09, 29 June 2011 (UTC)[reply]

Full access to an iPod's file system

Is there any way that I can gain full file system access to a regular iPod, be it via software on the iPod itself or from my desktop? --Melab±1 14:50, 28 June 2011 (UTC)[reply]

Sure. The music is in a hidden folder. On a Windows machine it's a matter of going to "Show hidden folders"; on a Mac it's a little trickier but it can be done. (You can easily access the hidden folders through the Terminal, for example; getting the Mac OS to show hidden folders involves some Terminal trickery.) --Mr.98 (talk) 15:00, 28 June 2011 (UTC)[reply]
You have full access to the file system. However, you might not be familiar with the type of file-system on your iPod: see How to determine your iPod's disk format from Apple's support page. If your iPod is formatted with HFS+, you may need to use the extended file attributes commands and utilities. You can read about these tools, in the public developer documentation, at listxattr on the Mac OS X Developer library. The xattr utility program is very helpful. Bear in mind that even though the iPod can register itself as a USB mass-storage device class when connected over USB, your iPod is a full-featured small computer, not a hard-disk-on-a-stick; so if you're trying to access its file-system, you should be aware that its operating system is on and in control of the storage-medium at all times, even when it is mounted as a USB "disk." Nimur (talk) 15:41, 28 June 2011 (UTC)[reply]
If I had full filesystem access I would think that I could see the kernel, plists, etc. --Melab±1 20:32, 28 June 2011 (UTC)[reply]
You want access to the iPod firmware? That is kept in a read-only area. You can hack it (Google "hack iPod firmware" — this howstuffworks page gives a nice overview) but it's not a straightforward thing, because the iPod is not a straightforward computer. --Mr.98 (talk) 21:07, 28 June 2011 (UTC)[reply]
If it is updated then it is not read-only. --Melab±1 22:49, 28 June 2011 (UTC)[reply]
I meant read-only in the sense that it is stored in ROM. It's a technical term. It doesn't mean it cannot be written to, just that it is harder to write to. The point is that it's not stored in a straightforward place (not on the hard drive, for example), and not a simple matter of getting "full access" or anything like that. You have to flash the firmware with something else (e.g. a Linux replacement) if you want to modify it, and it means that the firmware probably cannot modify itself. --Mr.98 (talk) 13:24, 29 June 2011 (UTC)[reply]

'Run As Admin' Option Not Appearing

I've been grinning and bearing this for a couple of weeks, but now it's just beginning to annoy me. I am finding that with some programs, the 'Run As Admin' option is not even present in the context menu - it used to be there for practically every executable, as far as I can remember. Sometimes, if I make a shortcut for these programs where the option has disappeared, the option now actually is present in the shortcut, but not in the original. Anyway, this is not normal. I should have the 'Run As Admin' option present in all executables (except one or two exceptions, which are beyond the scope of this question). In these same executables where this option is not present, they also lack the 'Properties' option, meaning I can't do anything from that dialogue box. Can anyone guess what has happened and how can I fix this? - EDIT - My UAC is turned on. Win Vista, Home Premium, 32 Bit. --KägeTorä - (影虎) (TALK) 15:43, 28 June 2011 (UTC)[reply]

This may be too elementary to even bring up, but personally I've had trouble with context items that I've fixed by first left-clicking to select the executable, and then right-clicking to display the context menu. I think of this as letting Explorer "catch up" with me. Comet Tuttle (talk) 23:41, 28 June 2011 (UTC)[reply]
No, it happens in the start menu too. You can't just highlight something in the start menu by clicking on it, as that makes the program run. Thanks anyway, but I think it's more complicated than this. As a bit of extra info, when I have been lucky enough to get a 'properties' option in the context menu, I sometimes now have the 'Run As Admin' option + checkbox in that dialogue box greyed out. --KägeTorä - (影虎) (TALK) 12:32, 29 June 2011 (UTC)[reply]


like a doggy with a stick. before you throw it, you have to wave it in front of their face so they even know you have a fucking stick. --188.28.242.234 (talk) 00:40, 29 June 2011 (UTC)[reply]

what format are rackspace cloud images in?

I took a server image (maybe "snapshot"); it's just listed as (name I gave it)cloudserver(numbers).tar.gz.0 at about 2 GB, and another 171-byte file called (name I gave it)cloudserver(same numbers as above).yml - what format would these two files be in? Could I download them and run them in a VM on my desktop, and if so, which one? Thanks. --188.28.242.234 (talk) 21:00, 28 June 2011 (UTC)[reply]

answering own question... apparently http://communities.vmware.com/thread/312288?tstart=0 says: " Rackspace solution seems to be based on XEN servers. So you should be able to extract VM's files from the tar archive and use VMware Converter to import it to your ESXi." --188.28.242.234 (talk) 21:26, 28 June 2011 (UTC)[reply]

How do you get a PDF to work on Word 2010?

Please help - every time I try to open a PDF with Word 2010, it says it is not a valid Win32 application. — Preceding unsigned comment added by 98.71.62.95 (talk) 21:36, 28 June 2011 (UTC)[reply]

MS Office applications like Word can't open .pdf files, even though they are able to create them. You should use a .pdf reader, like Adobe Acrobat Reader or Foxit or suchlike. --KägeTorä - (影虎) (TALK) 22:42, 28 June 2011 (UTC)[reply]
See also List of PDF software, which isn't a mere list but instead says which is which. Tama1988 (talk) 08:07, 29 June 2011 (UTC)[reply]

PHP MyAdmin help

When I try to export a database, I only get part of it - it doesn't matter whether it's zipped/gzipped or a single file, it's always partial, and always about 25%. It seems the export stops before the procedure has really ended, and it always stops at the same point... it's been more than a week like that, so annoying!! Thanks, Beni. — Preceding unsigned comment added by 79.179.8.59 (talk) 23:09, 28 June 2011 (UTC)[reply]

Is PHP timing out? In your php.ini file, there is a setting for max execution time. It is usually around 30 seconds. A complete MySQL dump of a large database can take much longer (mine takes about 30 minutes to dump). You can set the maximum time to 0 to disable it and see if that helps. -- kainaw 12:38, 29 June 2011 (UTC)[reply]
Kainaw probably has the correct cause. If you do not have the ability to alter the timeout (or simply don't want to), you can split your export into x pieces - either by exporting each table individually, or, if it is a large table, by exporting the first 25% of the rows in one dump, the second 25% of the rows in a second, etc. 72.159.91.131 (talk) 18:51, 29 June 2011 (UTC)[reply]
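For reference, the timeout Kainaw mentions can be lifted either by editing the max_execution_time directive in php.ini or from the export script itself at runtime. A minimal sketch, assuming your host lets you override the setting (some shared hosts don't):
<?php
// Either of these removes the execution time limit for this script only;
// the equivalent php.ini directive is:  max_execution_time = 0
set_time_limit(0);
// or:
ini_set('max_execution_time', '0');
?>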

June 29

Playing Downloaded .cue & .flac files

I have downloaded some music (my first time) and find I have a .cue and a .flac file for the piece I downloaded. What needs to be done for one or both of these to become a playable sound-file on my Windows media player? Can anyone help a musical newbie please? Gurumaister (talk) 07:30, 29 June 2011 (UTC)[reply]

Sorry, I don't know the answer, but this looks helpful. Tama1988 (talk) 08:12, 29 June 2011 (UTC)[reply]

Moderate (or just slight) network security for dummies

Or anyway a dummy: me. Sorry, there's something about networks: whenever I try to read how they work, I get totally confused and give up. Anyway, I have a dumb wired/wireless security question.

I work in a large organization. There are ethernet sockets all over the place. Anyone can bring any computer, plug it in, and get to the web. Here's the process: Point your browser to any URL, and up will pop the log-in screen. Feed that your ID and password, and you're in -- thereafter you can browse and download freely. (Surprisingly, you can do this for two computers at the same time. Though there seems to be some prohibition of bit torrents.)

I use this for such purposes as online purchases, when I divulge my credit card details. (Of course only with https and when I see the browser's little closed-padlock icon.) Is this stupid of me?

There are also wireless LAN access points all over the place. There's no encryption at all -- you too could stroll in (or sit in your car outside) and you'd get to the log-in screen. Even when I'm "https, closed-padlock", I avoid giving credit card details, etc, when connected wirelessly. Is it stupid of me to be worried?

The organization has just added a second wireless LAN (or second SSID). This one is password protected. However, it's a single password for everyone, and there's effectively no protection of the password. (If you wanted it, you could easily find it online.) I don't follow the logic here (but recognize that I'm an ignoramus). Does (A) an SSID that's encrypted but whose password is public knowledge have any advantage over (B) an open SSID? Tama1988 (talk) 08:04, 29 June 2011 (UTC)[reply]

Your primary question appears to be: Is https security safe? The quick answer is: Yes. The deal with networks is that it is layered. On the wires themselves, you have little 1's and 0's moving along (technically, they are electrons). From endpoint to endpoint, the hardware speaks a specific language. This is a packet-based language. One item of hardware will grab an imaginary packet (or envelope), put some information in it, slap an address on it, and send it to another one. The wires see 1's and 0's. The end hardware sees packets. Your software on your computer speaks in streams of data, not packets. So, you send an email. Your whole email streams out to the hardware. The hardware chops it up into chunks that will fit nicely into a packet. Each packet is sent off to the end recipient hardware. On that end, the hardware takes all the information out of the packets and streams it to some software on the other end. It should be obvious here that the little packets don't all take the same route from one place to another and don't necessarily travel in order. So, if I grab a packet off the internet, what do I have? I have a small chunk of some stream from some program. Useless. I have to get close to your computer's hardware to grab all the packets and then I can put them back together. So, there's another thing that you can do. You first contact the server you want to send something to. You put a secret key in a lockbox and put a padlock on it. That is sent to the other server. The other server puts its padlock on it and sends it back to you. You remove your padlock and send it back to the server. The server removes its padlock and reads the secret key. This could go the other way (the server sends the key to you), but what is important is that there is never a case where the secret key is travelling over the network without being locked (encrypted). Now, you and the server have a secret key that is very long and that you will use only one time. You encrypt your message with it. Stream the encrypted message to your hardware. The hardware packets it and sends it as 1's and 0's over the wires. Eventually, the server gets it and decrypts it. That is a very loose description of https. If someone were to be watching the 1's and 0's, they have many hurdles. First, they need to get all the packets. Then, they need to order them. Then, they need to decrypt the message - which is the hardest part. Because the message is short and the key is huge and never used again, decryption is nearly impossible. Notice that none of this has anything to do with being wired or wireless. -- kainaw 12:56, 29 June 2011 (UTC)[reply]
OK, I got it - as far as https goes, at least. (And yes, I do check all the signs that I really do have a proper https connection - not that I'd previously understood what that meant.) Tama1988 (talk) 00:19, 30 June 2011 (UTC)[reply]
Windows, Network, Security. Pick two. Googlemeister (talk) 14:45, 29 June 2011 (UTC) [reply]
Network and security. I don't see how the OS is directly relevant to my question, but FWIW I don't use Windows. Tama1988 (talk) 00:19, 30 June 2011 (UTC)[reply]
As Kainaw said, HTTPS pretty much guarantees no one in between will be able to decrypt your content without the cooperation of either end of the connection. So the issue is not so much do you trust the network but do you trust either end? The bank or whatever should not be a big concern. But what about the computer you are using? It's not clear to me if this is a personal computer or a work computer. If it's a work computer, do you trust whoever has administrative powers not to do something dodgy, e.g. install a keylogger or install a modified browser which makes you think you are visiting a secure site with a secure certificate verified by one of the normal root certificate agencies? Do you trust them to ensure no one other than them can modify the computers either locally or remotely (whether directly or with malware) to do one of those? If it's your own computer, do you trust yourself to ensure your computer is secure and no one can do something like that? The 'remotely' part is perhaps important here: are your firewall and network security settings properly set up? Some OSes may give lower security on a LAN, thinking the other computers are more trusted, but this is probably a bad idea in a workplace (although not as bad as a public wifi access point or if you have dodgy flatmates, since the network admins will often at least try to restrict what goes on in the LAN and there would be greater concern about the legal risks). Nil Einne (talk) 15:16, 29 June 2011 (UTC)[reply]
It's one or other of my computers. Nobody else has ever touched any of them. They're running Debian stable, fully updated and only from the right repositories. (There may be a common idea that Debian is for computer experts but it was pretty easy to set up and anyway I am no computer expert. And I know next to nothing about networks.) I haven't encountered any obvious restriction on what goes on via the LANs other than the prohibition of torrent downloads; the article on firewalls talks about "packet filtering" and maybe there's some of that; I wouldn't know.
Back to the first question. An institution has (A) a wired LAN that requires a personal ID and password. It also has (B) a wireless LAN with no additional requirements. It then adds (C) a second wireless LAN (or SSID) that requires a password for any use (before you get to the personal ID/password web page), but makes this additional password pretty public. "Common sense" tells me that (C) has no advantage over (B), but am I missing something here? Tama1988 (talk) 04:42, 30 June 2011 (UTC)[reply]
If the password is associated with WPA then the over-the-air content will be encrypted, and even your non-https traffic will be difficult to impossible to read using wireless sniffers. --Phil Holmes (talk) 11:26, 30 June 2011 (UTC)[reply]

How to create a mobile phone app

So, I've got an idea for a mobile phone app. I've looked extensively through other similar apps, but have not found anything quite like what I envision. Without saying too much, the concept of the app is very similar to existing apps...just the "purpose" is different, and is what makes my idea unique. Anyway, a number of trusted friends I've talked to about this think it's a really good idea, and something they would probably pay for. The problem is, I have no knowledge of even where to begin. Is there a good resource to teach myself how to create an app? Quinn BEAUTIFUL DAY 16:18, 29 June 2011 (UTC)[reply]

It depends on the phone. iPhone apps are programmed in Objective C and you really need a Mac; then you get the iPhone SDK from Apple. Android apps are written in Java or C and you get the Android SDK from Google. J2ME apps are written in Java, and you get the SDK from Oracle. All three are fairly different environments and porting code from one to another isn't a trivial matter. -- Finlay McWalterTalk 16:28, 29 June 2011 (UTC)[reply]
I suppose since I have an Android, that I would go that route. What is an SDK? Quinn BEAUTIFUL DAY 16:29, 29 June 2011 (UTC)[reply]
Software development kit. -- Finlay McWalterTalk 16:31, 29 June 2011 (UTC)[reply]
All of the phone platforms have developer guides like the Android Developer Guide. -- kainaw 16:30, 29 June 2011 (UTC)[reply]
Perfect! Thanks! Quinn BEAUTIFUL DAY 16:53, 29 June 2011 (UTC)[reply]

Ad problems, the sequel

Referring back to this question, I have new information to add.

Some of the ads still appear, but not after I click any more. They gradually cover the screen after I have been at a particular location for a few seconds, and the "Close" link appears quickly (this didn't happen before) and works quickly enough (it was slow to do anything). Once, though, I thought I was clicking on "close" and ended up clicking on the ad itself, which was annoying. Anyway, these particular problems seem to be resolved.

There was one case where the "close" link was hard to find, and it said "roll over", whatever that means. So someone needs to explain themselves better.Vchimpanzee · talk · contributions · 18:21, 29 June 2011 (UTC)[reply]

The people who run the websites get paid if you click on an ad. It is in their best interest to do everything possible to make you click on the ad. So, why would they make the ad small, out of the way, and give it a big "Close Me" button? The solution is to stop visiting sites that use annoying ads. If nobody visits them, they will look for another way to make money. -- kainaw 18:36, 29 June 2011 (UTC)[reply]
I'm not going to stop visiting such sites. They need to be nice to their users. A big "Close me" button is the only way to do that when what you want has been covered up without your permission. They will hear from me otherwise and be told I will avoid doing business with any of their advertisers.Vchimpanzee · talk · contributions · 19:26, 29 June 2011 (UTC)[reply]
I'm certain they'll be impressed. --Tagishsimon (talk) 20:03, 29 June 2011 (UTC)[reply]
Of course, Jack in the Box and Citi have poisoned their reputations (in my view).Vchimpanzee · talk · contributions · 20:05, 29 June 2011 (UTC)[reply]
I just visited their sites and I don't see any particularly obnoxious ads. Are you sure those ads are theirs? With respect to "I won't stop visiting their sites... I'll avoid business with their advertisers": what happens is you visit sites and click ads, by accident maybe, but that is you "doing business with them" and probably the only business they expect to get anyway. So they're already winning, and so is the site you are visiting, who gets paid by the advertiser, and you lose by having yourself annoyed. The solution is either to use an ad blocker or to stop interacting with sites who do this, i.e. don't go to that site. Of course, I'd verify it was the site popping these ads up, and not some sort of spyware/virus. Chris M. (talk) 20:38, 29 June 2011 (UTC)[reply]

I've already stated it's up to the library whether to use an ad blocker. And I did say the obnoxious ads no longer seem to be a problem.Vchimpanzee · talk · contributions · 21:14, 29 June 2011 (UTC)[reply]

"Roll over" usually means moving your mouse cursor over the ad to make it pop out or display something. In such a case, avoid moving your mouse over the ad. Nil Einne (talk) 23:08, 29 June 2011 (UTC)[reply]

If these are library computers I suppose they are using Windows. (Why should a library use free software when it can instead use software that costs money?) If so, I'd click Alt-F4 at the sight of any obnoxious pop-up. There's no guarantee that either "Close" or a little "x" on a junk window will work to close it.

Further, if you have a computer anyway and sites look bad on computers over which you have no control, then prepare your own computer properly and thereby view garbage-free versions of the sites. Tama1988 (talk) 08:21, 30 June 2011 (UTC)[reply]

Alt-F4 will close the whole browser, including any tabs (if the OP ever gets round to using them), which isn't what the OP wants. It's also a little silly to get so paranoid about ads; I probably spend less time closing them than people spend worrying about them, and I've never found the need for a popup blocker or ad remover beyond what comes with my browser (even when visiting dodgy websites). Also, with the vast majority of ads on non-dodgy websites, clicking the x or "close" will remove them. Note that we are not talking about popup windows here, as explained in the OP's earlier post, but ads which display in the current browser window, sometimes hiding the content completely until they are closed (or the browser doesn't show the actual content until the ad is closed). The OP apparently does not want their own computer and, from memory of their user page, needs to be careful with money, so telling them to get one just to view news websites, despite their persistent posts about issues with library computers, is about as helpful as telling someone looking for reviews of US health insurance companies for individuals that they should move to Canada. Besides that, from all the comments we have seen from the OP, it's questionable whether a computer they have to administer themselves would be any better. Nil Einne (talk) 12:14, 30 June 2011 (UTC)[reply]
If you've seen my user page you know I have a computer. I don't use it to go to these problematic sites. It's not silly to get paranoid about ads because if they cause these problems it makes things very complicated. I don't need any further complication. And I'm not going to close the browser and I do not, will not, use tabs. I don't know how to get through to everyone. My mind does not and will not work that way. Windows are at the bottom of the screen. That's it.
And as I've said, the problems seem to be resolved for now. Vchimpanzee · talk · contributions · 13:45, 30 June 2011 (UTC)[reply]

HFS+ drive backed up onto FAT32 drive

A while back my family's iMac started experiencing troubles and we had to reinstall OS X. Before that I used Data Rescue II to back up / clone the contents of the disk (200+ GB) to a .dmg on a FAT32-formatted external hard drive. The .dmg is 4 GB—obviously nowhere near 200 GB—and cannot be mounted. I think that Data Rescue may have split it up into blocks and scattered them throughout the drive, which is a Western Digital 1.5 TB My Book Home Edition. Viewing the information about this drive with Disk Utility reveals that it is 1.4 TB (which Disk Utility shows as 1,500,301,910,016 bytes—marketing, huh: 1.5 terabytes, with 1 terabyte being equal to 1000^4 bytes instead of 1024^4 bytes) and that its capacity is 1.36 TB, a discrepancy. Could my "scattered" backup somehow be in this discrepancy? --Melab±1 18:47, 29 June 2011 (UTC)[reply]

Marketing does not use powers of 2 when using MB, GB, TB, etc... They use "thousands" increments. If you want to be technical, it was the computer programmers and hardware engineers who incorrectly called 1024 bytes a kilobyte. So, all you have here is a disagreement between what you want kilo to be and what the marketing department is calling kilo. As for the actual size discrepancy, it is expected that the overhead required by the filesystem will take up room on the disk. The larger the disk, the more overhead you have, because there is a lot of indexing and free-space management going on. Your data shouldn't be anywhere in the disk overhead area. As for having a file scattered around a drive - that is normal. There is no reason that a drive must keep a file in one continuous chunk. It is nice for humans to find a file in one piece, but a computer will happily hop around the drive grabbing up the little chunks to make one big chunk. -- kainaw 18:57, 29 June 2011 (UTC)[reply]
I know that marketing uses powers of 1000. Using the numbers that Finder displayed, I have around 74 GB on it. Remember that I backed this up into a DMG which reached the 4 GB limit of FAT32 files, and I also did a deleted-files recovery with Data Rescue 3 a few months later, probably contributing to the 74 GB. What I want to know now is: can I reconstitute the old backup? --Melab±1 19:26, 29 June 2011 (UTC)[reply]
How can I reconstitute this back up? --Melab±1 22:40, 29 June 2011 (UTC)[reply]
Make sure you show all files including hidden, system and protected files, and look at the disk. Do you see any other files other than the single DMG? If not, there's a very good chance the backup is not there, and was probably not carried out properly in the first place. Nil Einne (talk) 23:05, 29 June 2011 (UTC)[reply]
A few months after this I stored some recovered files on the drive after I accidentally deleted a VirtualBox snapshot. --Melab±1 23:36, 29 June 2011 (UTC)[reply]
What is the possible compression ratio of DMG files? --Melab±1 00:43, 30 June 2011 (UTC)[reply]
That would depend entirely on the source content; Apple Disk Image says bzip2 (and others) is supported. Unless your content was nearly all text files or something similar, a 50:1 compression ratio is very unlikely. It is also unlikely that storing further files on the drive removed your backup, although if the backup was made but somehow deleted, that could have greatly reduced your chance of recovery. However, I stand by my earlier comment: if there are no other files on the disk that you don't recognise, there is likely no further backup, and the most likely possibility is that your backup wasn't carried out properly. I realise I forgot to mention this, but you should always verify your backup before you need it, which means, amongst other things, making sure any backup you carry out is done properly. Nil Einne (talk) 12:02, 30 June 2011 (UTC)[reply]
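For what it's worth, a quick way to see what is actually on the drive is to list everything (including hidden files) and the space in use from the OS X Terminal. This is only a rough sketch; the mount point "/Volumes/My Book" is an assumption and may differ on your system:
  # list every file on the volume, including hidden ones
  ls -laR "/Volumes/My Book" | less
  # show how much space is actually in use on the volume
  du -sh "/Volumes/My Book"
If the only sizeable file that turns up is the single 4 GB .dmg, that would support the suspicion that the original backup never made it onto the disk.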

Shell script help

Resolved

Hi. I'm writing a little shell script and can't quite find the right way to do it. I have a gazillion files. In each file, each line is comma separated with words and numbers. But in a file, each line has a different number of entries, and the numbers and words are different lengths. I want to extract the number in the nth row, mth column of each file. I can get the nth row with "sed -n 2p" but I can't extract the mth entry on that line (which is a number). What is the best way to do this? Thanks, Robinh (talk) 21:32, 29 June 2011 (UTC)[reply]

You could use awk, as in this example: echo "hello,there,folks" | awk -v foo=2 -F"," '{print $foo}'
Where -v foo=2 sets the awk variable foo to 2; you'd set it to whatever m you wanted
and -F"," sets the field delimiter to a comma
-- Finlay McWalterTalk 21:47, 29 June 2011 (UTC)[reply]
A slightly more succinct version of which is awk -vm=1 -F, '{print $m}' -- Finlay McWalterTalk 21:48, 29 June 2011 (UTC)[reply]

Put more simply: awk -F, '{ print $m; }' ¦ Reisio (talk) 22:00, 29 June 2011 (UTC)[reply]

Thanks guys. Works perfectly. I've never really got to grips with awk. Cheers, Robinh (talk) 22:12, 29 June 2011 (UTC)[reply]
That's a workable solution, but if speed matters, it would be quicker to use cut rather than awk; that is, cut -d "," -f $m. Looie496 (talk) 23:58, 29 June 2011 (UTC)[reply]
Heh, even better! thanks again, Robinh (talk) 01:12, 30 June 2011 (UTC)[reply]
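To tie the thread together, here is a minimal sketch that applies the same idea to every file at once; the *.csv glob and the values n=2, m=3 are placeholders for whatever files, row and column you actually want:
  n=2   # row number
  m=3   # column number
  for f in *.csv; do
      printf '%s: ' "$f"
      sed -n "${n}p" "$f" | cut -d, -f "$m"
  done
cut prints the mth comma-separated field of the nth line of each file; swapping the cut for one of the awk one-liners above works just as well.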

web designing tuition online for beginners [80 years old, Me]

I am Sunder Thadani, a netizen from Mumbai [age: 80 years]. I wish to learn web designing. I need help/guidance on online tuition from the web [fee-free]. I have the latest PC with Windows 7 and Office 2010 as platform. My knowledge of computers is good, since I have been using the Net since 1999. Your attention will be highly appreciated. Sincerely, Sunder Thadani — Preceding unsigned comment added by Sunder360 (talkcontribs) 22:53, 29 June 2011 (UTC)[reply]

http://wsc.opera.com/ http://www.htmlhelp.com/ http://css-discuss.incutio.com/ http://www.brainjar.com/ http://www.htmldog.com/ http://css.maxdesign.com.au/ http://www.alvit.de/handbook/ ¦ Reisio (talk) 23:12, 29 June 2011 (UTC)[reply]

W3Schools is very helpful. --Melab±1 23:37, 29 June 2011 (UTC)[reply]

Or was it harmful? http://w3fools.com/ ¦ Reisio (talk) 23:45, 29 June 2011 (UTC)[reply]

HTMLdog is also helpful. Unlike Reisio, I find W3Schools to be extremely helpful. TheGrimme (talk) 13:44, 30 June 2011 (UTC)[reply]

No doubt because you aren't informed enough to identify it as harmful. Check out the link. ¦ Reisio (talk) 18:29, 30 June 2011 (UTC)[reply]

June 30

Creating a mobile phone app part 2

Crap. So, I read through a good portion of the Android tutorial on creating mobile phone apps, and found out very quickly that it goes way over my head a short way into it. So, what if I want to recruit someone to develop this mobile application for me... what's the best way of going about it? I don't have any close friends with this type of skill set, so I am looking for: the best way to recruit someone; the going pay rate; whether I should offer a share of any future profits; and, most importantly, how to establish myself as the sole copyright holder. Basically, I want to do this in a professional way, and minimize the risk of someone "stealing" my idea, or taking control of it in the future. Is this something that would require legal counsel to come up with a contract... or can I cover my ass through a basic "contract labor" agreement? Opinions welcomed. 03:18, 30 June 2011 (UTC)

Is this an application you've already developed and just need to port to the Android platform ? Or is it just an idea ? If so, it may or may not be possible to code it. You might want to run it by a programmer you trust to see if it's possible, before hiring somebody to do it. StuRat (talk) 03:49, 30 June 2011 (UTC)[reply]
Here are some ideas:
  • Find some Android development online communities/forums and create a posting saying that you will pay somebody to write an app for you. Give details, don't be vague. Check their prior projects.
  • Go to a local college Computer Science department, and tell them you would like to create a contest and educational opportunity for students.
  • Find a few (small) apps in the Android app store, go to the websites of the authors, and ask if they would be interested in writing an app for you.
Good luck. TheGrimme (talk) 13:31, 30 June 2011 (UTC)[reply]
There are companies that develop and promote phone app ideas, such as IDC Projects. -- kainaw 13:42, 30 June 2011 (UTC)[reply]

Phone apps, business side

How do developers of phone apps get compensated ? StuRat (talk) 03:57, 30 June 2011 (UTC)[reply]

They probably get paid by the companies which employ them to write their code or come up with ideas. Or are you referring to something more intricate, like compensation based on the volume of sales of a particular piece of software? I was this close to writing something like by bank transfer. --Ouro (blah blah) 08:41, 30 June 2011 (UTC)[reply]
If by "developers" you mean the organisation (company) that creates and publishes an app, are several ways:
  • paid downloads (an end user pays to download and install the app); it's common to have a limited free edition and a more fully-featured paid edition (or the free edition is trialware)
  • ad supported (both Apple and Google have ad networks that share revenue with apps that host their ads)
  • affinity or revenue sharing - for things like shopping or ticket booking or hotel finding, the app developer has some agreement with the network that's actually selling the product, and they share some of the revenue
  • subscription (for something like a MMORPG)
There are also several classes of app which wouldn't expect to yield revenue themselves:
  • where the app is an adjunct of a larger system (so, for example, if I create a corporate groupware system, I might have an iPhone and Android app for it, but I make the money off the groupware server licence)
  • where the app is a promotional item
If by "developers" you really mean those who create the app, assuming they're not publishing it themselves, then it's just the same as other for-hire coders, designers, and artists. -- Finlay McWalterTalk 08:45, 30 June 2011 (UTC)[reply]

Actually, I do mean independent developers that publish apps themselves. Is there some arrangement (like calling 976 numbers) where the phone bill includes those fees, and they are then passed on to the developers, or must every app developer make their own billing arrangements with customers ? StuRat (talk) 13:40, 30 June 2011 (UTC)[reply]

The Apple App-Store and the Android Market act as a marketplace for those people; they collect the money, take a (large) cut, aggregate the result, and send the developer what's left. I honestly don't know about where VAT / sales tax is collected. Many Android phones allow loading of apps by other means ("sideloading"), and both Android phones and iPhones can be jailbroken (so apps can be installed from anywhere) but selling an app in such a case (where the app-store's usual protections aren't available) without it being copied for free is difficult. -- Finlay McWalterTalk 13:48, 30 June 2011 (UTC)[reply]
I see. So how much is the developer's cut ? StuRat (talk) 13:50, 30 June 2011 (UTC)[reply]
For Android Market the developer gets 70% (ref). -- Finlay McWalterTalk 13:52, 30 June 2011 (UTC)[reply]
App Store (iOS) says it's the same percentage (but Apple also charges a fee for signing apps, and I think there's a fee for them reviewing your app before they'll put it in the app store). -- Finlay McWalterTalk 13:55, 30 June 2011 (UTC)[reply]
Thanks:
1) Do apps bill for a one-time purchase, a monthly rental, or per use ?
2) What ranges of prices are there ?
3) How are (free and non-free) app updates/upgrades handled ? StuRat (talk) 14:00, 30 June 2011 (UTC)[reply]
  1. One time. But apps can do "in-app billing" (where the user performs a transaction within the app to, say, buy a digital good like a magic sword or a music track) - see [9]
  2. Android's ranges are here. The bottom of the range is essentially determined at the point at which it's cost-effective for Google to make a credit-card transaction.
  3. In Android, free is free. If I understand iOS App Store correctly, free apps still have to pay to be signed and reviewed. Apple's info about their App Store distribution is here.
-- Finlay McWalterTalk 14:08, 30 June 2011 (UTC)[reply]
Is that in addition to the $99 fee you pay for the iOS SDK/to join the developer program [10]? Note that the Android marketplace link seems to suggest they collect sales tax or GST/VAT in some countries. Nil Einne (talk) 18:57, 30 June 2011 (UTC)[reply]

Source of online topographic map

Does anyone know where this map is coming from: http://www.startribune.com/newsgraphics/124520694.html

It's a detailed topographic map. If you zoom in far enough, it shows individual house numbers, decks and sheds. Far better than other "free" online maps.

Samw (talk) 03:34, 30 June 2011 (UTC)[reply]

Do you mean the outlines of houses and garages ? That's all I see, not house numbers, sheds, and decks; but perhaps the level of detail varies by location. StuRat (talk) 03:43, 30 June 2011 (UTC)[reply]
Hmm, it looks like American cities have restricted info. Try Vancouver or Toronto. Samw (talk) 15:32, 30 June 2011 (UTC)[reply]
There's a logo in the bottom-right, that of GIS company ESRI. The underlying topographic info will likely be a general topo dataset (like Shuttle Radar Topography Mission or something from USGS) that's been rendered with ESRI's map engine. With enough skill with GIS tools it's possible to create something of comparable quality - consider File:Antelope Island State Park Map.jpg for example, created with ESRI's ArcGIS. -- Finlay McWalterTalk 08:53, 30 June 2011 (UTC)[reply]
Sorry I didn't notice the obvious ESRI logo. Yes, their website has the equivalent topo maps. Hopefully that is permanent, as this is the best "free" topo map I've seen on the web. Thanks! Samw (talk) 15:32, 30 June 2011 (UTC)[reply]
As to the other layers, most developed countries are covered by several GIS layer providers who sell layers for buildings, electrical connections, water pipes, roads, sewerage, hydrology, rainfall, flood risk, fire risk, crime level, population, and so forth. Subscriptions to these services are often fairly expensive. It looks as if the Star Tribune has such a subscription, and employs (or perhaps subcontracts) someone with sufficient skills to make something informative and attractive out of the complex datasets. -- Finlay McWalterTalk 10:31, 30 June 2011 (UTC)[reply]
You don't have to go far out of the larger cities to lose the building detail. For example, the building detail for Paris stops just outside the Boulevard Périphérique, and the detail around New York City stops just past Newark Liberty Airport. Astronaut (talk) 15:18, 30 June 2011 (UTC)[reply]
I wonder how they got that level of detail. It could be computer-generated from satellite images, I suppose, but then I'd expect more mistakes. Doing the whole thing by hand would be prohibitively expensive. Perhaps it's computer-generated and then reviewed by humans, to catch the blatant mistakes. StuRat (talk) 17:43, 30 June 2011 (UTC)[reply]

Thanks everyone for the quick response! Samw (talk) 15:32, 30 June 2011 (UTC)[reply]

A lot of USGS maps (that are still useful) are very old, and were in fact made by humans. ¦ Reisio (talk) 18:27, 30 June 2011 (UTC)[reply]

Processor cloud

Anyone know of an online service that offers the user/subscriber access to multiple processors (thousands), presumably by assigning each processor a unique address just as memory addresses are assigned, in order to support programs that have subroutines which are intended to run in parallel rather than sequentially? --DeeperQA (talk) 10:28, 30 June 2011 (UTC)[reply]

I don't know of any. Sounds like you want to rent a data center! What theoretical use do you have for it? Trying to run Crysis? :) --24.249.59.89 (talk) 14:49, 30 June 2011 (UTC)[reply]
It's not practical to build computers with that architecture. The problem is that, when you have a whole lot of processors talking to one uniform pool of memory, performance goes down and cache coherence becomes a nightmare. This is why non-uniform memory access is being developed. Paul (Stansifer) 17:12, 30 June 2011 (UTC)[reply]

Harvesting HDD from an external enclosure

I've noticed that when some external retail hard drives from a manufacturer go on sale, they actually cost less than the manufacturer's new bare internal OEM drive of the same capacity. Is there any issue with disassembling the external case to liberate the bare drive? They aren't hard-soldered to the USB bridge, are they? --24.249.59.89 (talk) 14:43, 30 June 2011 (UTC)[reply]

I've never encountered one that was. Every one I've opened has been a run of the mill desktop or laptop drive, with a conventional PATA or SATA connector, and a USB interface chip. -- Finlay McWalterTalk 14:46, 30 June 2011 (UTC)[reply]
Same here, they've all been attached with the same SATA or IDE connector. Can be kind of a bear to break into though sometimes. RxS (talk) 21:06, 30 June 2011 (UTC)[reply]

projected and non-projected visuals

please what are the differences between projected visuals and non projected visuals — Preceding unsigned comment added by Kofibarwuah (talkcontribs) 14:51, 30 June 2011 (UTC)[reply]

I'm guessing this is in an educational context. If so, projected visuals are those which are, err, projected, such as overhead projectors, slide projectors, digital projectors. By a process of elimination, I guess non projected visuals are such things as handouts, books and objects best appreciated at desk by individuals rather than viewed en masse by the class. --Tagishsimon (talk) 15:06, 30 June 2011 (UTC)[reply]
I'm guessing you already know what they are, and are asking about the difference in quality, between, say, a projection TV and a non-projection TV, such as LCD or plasma. Here are some diffs:
1) While the projection could actually be brighter, if projected onto a small enough screen, it's typically dimmer, requiring a darkened room to view it.
2) A projected image can be distorted, depending on the location of the projector and shape of the screen. Similar distortions were also possible with a CRT TV (such as the keystone effect), but aren't with newer types.
3) If the projector does the colors separately, you also have potential for colors to be misaligned. Again, this could happen with a CRT, too.
4) The projected image may be out of focus.
So, overall, a projected image is inferior, and is only used when the size of the screen is too large for other methods. StuRat (talk) 17:33, 30 June 2011 (UTC)[reply]

Finding out why my laptop rebooted

Hey guys, I was wondering if there was a way to find out why my Windows Vista laptop rebooted itself during the night. I left it on so I could download some files overnight, but when I woke up this morning, I found that nothing was going on. My download manager was closed (the files hadn't downloaded either), and I checked on Task Manager to find that the computer had restarted itself around 3:33 AM. Is there any way I can find a reason for this reboot? I first suspected it may have been overheating, but usually when it overheats it simply shuts down, not restarts. I've checked Event Viewer but haven't found anything conclusive (or maybe I'm reading it wrong). Any ideas? Thanks. 141.153.214.125 (talk) 17:32, 30 June 2011 (UTC)[reply]

I know you've said you've looked, but Event Viewer is likely where you need to look. The "System" log will at least show startup events, so you'll see what time it started up. Look out for any Windows Updates, as possibly you have your settings set to automatically install (and reboot). Or look out for a "Windows recovered from an unexpected shutdown" entry, which basically means a blue screen happened (it could also be caused by just turning off the power, but obviously you didn't do that). Or possibly you had a slight power cut, enough for the computer to reboot. Sorry I'm not sure I'm being much help, but the System and Application logs in Event Viewer are the two which will give you the clues.  ZX81  talk 18:27, 30 June 2011 (UTC)[reply]
Checking all nodes of the Event Viewer is probably the best route. I would also disable the feature where Windows automatically restarts the computer in the event of a fatal error, which might be happening. See this article on how to disable it in Windows 7, Vista, and XP. Additionally, you might find more help in the Computing section of the reference desk. TheGrimme (talk) 20:38, 30 June 2011 (UTC)[reply]
A couple of my machines have whinged at me to be rebooted over the last 24ish hours, having downloaded an MS security patch. It's possible your reboot was caused by that. I'm not sure what forensics are available in that department. --Tagishsimon (talk) 21:10, 30 June 2011 (UTC)[reply]
I was just going to say "security patch", too, because Windows machines are configured by default to restart after automatically downloading certain types of patches from Windows Update in the middle of the night; and Microsoft has issued several patches over the last week. If you go to the Control Panel and type "Windows Update" (don't hit Enter!) in the "Search Control Panel" field, you'll see an item called "Turn automatic updating on or off", if you are interested. Comet Tuttle (talk) 22:03, 30 June 2011 (UTC)[reply]
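As a supplement to digging through the Event Viewer interface, the same System log can be dumped from a Command Prompt with wevtutil, which ships with Vista. A rough sketch only (the event count of 30 is arbitrary):
  wevtutil qe System /rd:true /c:30 /f:text | more
That lists the 30 most recent System events in plain text, newest first; entries from around 3:33 AM mentioning Windows Update restarts or an unexpected shutdown would point to the cause.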

firefox/Fedora 7 not behaving

Hi, I'm currently using Fedora 4, and I recently got a copy of the Linux Fedora 7 dvd/ live cd, and had a go at using the live cd. It doesn't recognise my printer, and firefox crashes every time I right click, whether on a link, or on a blank part of the page. My current (Fed 4) set up is a nuisance, because it doesn't recognise my printer, evolution email doesn't delete messages properly, it won't copy to cdrom, won't do most music and video apps, and firefox is highly sensitive to javascript content (wikipedia crashes with javascript on if I try to edit). Further, when I turn on the computer, it will only recognise the modem about 50% of the time, otherwise giving me a "SIOCSIFNETMASK" message saying the modem is unreachable. Nevertheless, I can live without a printer, as I hardly ever use it, I can reboot the system and it will recognise the modem second time round, and the other problems I can put up with. I want to upgrade to get a slick system that works in all these details, but it looks as though things are getting worse, and I'm not sure about taking any risks.

Firstly, if I upgrade, will Linux install if I change my mind and try to downgrade (putting Fedora 4 over a later version)? Secondly, does anyone know if the firefox crash (and printer woes) might only pertain to the live cd version I used (and not be found on the full install)? Finally, what's happening here? Surely someone could have tested Firefox and found it was just crashing all the time. Does it get better with later releases after Fedora 7, because I'm finding them hard to come by (not available in libraries and bookstores)? Thanks, It's been emotional (talk) 17:36, 30 June 2011 (UTC)[reply]

How do you have internet access? The latest releases of Fedora are comparatively easy to download. Is your printer less than four years old? Fedora 7 stems from 2007, so it's not that old, and yet four years in computing almost seems like forever. My first recommendation is to download a newer version of Fedora if possible and try to install that. Oh, and what are the specs of the computer you are using? Cheers, Ouro (blah blah) 21:25, 30 June 2011 (UTC)[reply]
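On the modem/SIOCSIFNETMASK symptom specifically: rather than rebooting when the connection comes up wrong, it is sometimes enough to restart the network service from a root terminal. This is a sketch only, assuming the stock Fedora init scripts manage your connection:
  su -c '/sbin/service network restart'
If that doesn't bring the modem back, the output of /sbin/ifconfig -a right after a failed boot would help narrow down whether the device is being detected at all.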

Difference between computers and humans

For me it's clear that routine tasks are performed better by computers, but what other differences are there between human and computer processing? Wikiweek (talk) 21:02, 30 June 2011 (UTC)[reply]

If you're not familiar with it, a read of Chinese room might be instructive, as might links therefrom. --Tagishsimon (talk) 21:05, 30 June 2011 (UTC)[reply]
Bio-inspired computing and its links might also be interesting. Note: You're incorrect about your first statement unless you really narrow down what "routine" means. If you mean "adding two integers", then yes; but a "routine" task for a human might be "recognizing the emotion that your mother is currently experiencing, in under one second", and humans currently are far superior to computers at that one. Comet Tuttle (talk) 21:58, 30 June 2011 (UTC)[reply]

Google search for "sex"

Does anybody have a clue why a Google search for "sex" brings up a top link to Sex (book) rather than sex? --84.44.231.244 (talk) 22:32, 30 June 2011 (UTC)[reply]