
Wikipedia:Reference desk/Computing



Welcome to the computing section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


September 28

Average case time complexity of maze backtracker solver

Hello,

Given a simply-connected maze on a 2-D grid (there exists a single path between any two points, no loops), how can the average time complexity of a DFS backtracking solution, and of a BFS solution, be found? I understand this is a problem of tree traversal and I have looked at this StackExchange answer, but I failed to understand the direction suggested there. Thanks — Preceding unsigned comment added by 77.126.23.147 (talk) 07:47, 28 September 2016 (UTC)[reply]

Depth-first search and breadth-first search are the relevant articles, but I don't know if they answer your Q. StuRat (talk) 14:43, 28 September 2016 (UTC)[reply]
This is a fantastic question, and I believe its answer is much more difficult to find than it might superficially appear. I agree that your StackExchange link is not taking you towards an easy-to-understand answer, let alone a mathematically sound solution.
It is very hard to determine statistics for the number of times you will revisit each square, on average, across the set of all possible simply-connected mazes. But if you can determine those statistics, you have your answer. I'm still thinking of a good way to analyze this problem simply and correctly, and which resources might be good to reference. Nimur (talk) 15:14, 28 September 2016 (UTC)[reply]
I teach this when covering Big O complexity. This is how I approach it: Assume the solution to your maze is X steps. A breadth-first search will check all solutions with 1 step, then all with 2 steps, then all with 3 steps, and so on. All of those are guaranteed to be a waste of time because we won't find the solution until we get to all paths with X steps. Even then, we will check, on average, half the paths of X steps before finding the solution. A depth-first search will get to X steps faster. Assume the longest path is actually X steps. You will immediately jump to searching all paths of X steps. Depth-first is clearly the way to go... but only in that case. Assume that most paths are longer than X steps. You will be searching a lot of paths that are too long before eventually getting to the correct X-length path. If X is small compared to the average path, you would be better off only looking at short paths - a breadth-first search. Of course, a random maze doesn't make it easy to know how long the solution path is. So, you have to guess. You turn it into a universe of paths. One of those paths is the correct one. How do you trim out the ones you don't want and focus on those you do want? For example, if I have a 5x5 grid and the starting point is on the left and the exit is on the right, I know that the absolute shortest possible path is 5. I can do a breadth-first search starting with paths of length 5. Getting a clear "This is better" answer is not really possible because it is based on the length of the solution compared to the lengths of all possible paths. 209.149.113.4 (talk) 16:06, 28 September 2016 (UTC)[reply]
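To make the two strategies concrete, here is a minimal Python sketch (an editorial illustration, not code from the thread; the toy maze, cell coordinates and helper names are invented for the example). Because a simply-connected maze is a tree, the DFS only has to avoid walking back to its parent cell:

from collections import deque

# Toy simply-connected maze: keys are (row, col) cells, values are the
# reachable neighbours. Being a tree, it has exactly one path between
# any two cells.
maze = {
    (0, 0): [(0, 1)],
    (0, 1): [(0, 0), (0, 2), (1, 1)],
    (0, 2): [(0, 1)],
    (1, 1): [(0, 1), (2, 1)],
    (2, 1): [(1, 1), (2, 0), (2, 2)],
    (2, 0): [(2, 1)],
    (2, 2): [(2, 1)],
}

def dfs(cell, goal, parent=None):
    # Depth-first backtracker: returns a path to goal, or None on a dead end.
    if cell == goal:
        return [cell]
    for nxt in maze[cell]:
        if nxt != parent:               # never step back the way we came
            rest = dfs(nxt, goal, cell)
            if rest is not None:
                return [cell] + rest
    return None                         # dead end: backtrack

def bfs(start, goal):
    # Breadth-first search: tries all paths of length 1, then 2, and so on.
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in maze[path[-1]]:
            if len(path) < 2 or nxt != path[-2]:
                queue.append(path + [nxt])

print(dfs((0, 0), (2, 2)))   # [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
print(bfs((0, 0), (2, 2)))   # same path: a tree maze has a unique solution

Counting how many cells each function touches, averaged over all random mazes, is exactly the open analysis question being debated in this thread.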
209..., I like your approach, but my concern rests exactly with this statement: "we will check, on average, half the paths..." Are you certain that this is actually a true statement? Doesn't this depend on some pretty difficult mathematics that describe the topology of a maze? Isn't this detail even more important in the case of a depth-first search, where (because paths in a simple closed maze must never overlap or cross - that would be a loop!) we have certain search paths that guarantee 100% of the sub-paths are unsuitable?
The statistical incidence-rate (and depth) of any such "guaranteed-to-fail" sub-path depends on the specific layout of any individual maze. The question seeks to find the average run-time over the set of all possible simple closed mazes. So we need to do some pretty heavy graph-theory math to analyze this.
So... your quantitative statement about the probability of choosing the correct path might be completely valid, but I am not certain; and I'm still brooding over a good method for proving or disproving it. Nimur (talk) 19:29, 28 September 2016 (UTC)[reply]
My statement that you will check half the paths is correct. Assume you have P paths of length X. You also (somehow) know that the solution is length X. You know nothing more about the maze. So, you have to start choosing paths and testing them. You might find the solution on the first check. You might find it on the last check. Every check from 1 to P will have a 1/P chance of being the one that finds the solution, so the expected number of checks is (1 + 2 + ... + P)/P = (P+1)/2, which is about P/2. In class this argument wouldn't come up because we would have already covered sorting algorithms and done many "on average" cases. Therefore, I'm not used to explaining it in better detail. Of course, if you did in fact know more about the maze, you could trim the search space. Assume that I know I can cut the search space in half by kicking out P/2 solutions. I'd have a new P of possible candidates. I would still have to check P/2 of those on average to find the solution. All I really did is replace P with a new P. I didn't change the fact that, on average, I will check half the candidates before finding the solution. 209.149.113.4 (talk) 11:26, 29 September 2016 (UTC)[reply]
That argument assumes that there is only a single solution. If there are more than one, you will, on average, have to check a smaller part of search level X. --Stephan Schulz (talk) 11:56, 29 September 2016 (UTC)[reply]
The assumption of a single solution seems appropriate for a "simple, closed" maze, unless I misunderstand that definition.
209's justification seems solid enough... I'm convinced that he has demonstrated "50%" as an upper bound.
I assert that it is possible to get an even tighter algorithmic upper bound, because each path searched has the potential to reveal information about the maze topology. Testing some paths, and failing to find the solution, may provide sufficient geometric information to guide the selection of the next test-path, which improves the method beyond relying on "random chance." I further propose that, with enough effort, such a geometry-guided depth-first search could be given an even tighter upper bound. That's the only bit I am not certain about: whether this method would actually beat the "big O" complexity class of the "random" path choice that 209 suggested depends on how much algorithmic complexity is required to analyze previous-path geometry.
Nimur (talk) 14:48, 29 September 2016 (UTC)[reply]
Another way to frame this argument is converting it to set theory. In our scenario, we have a set of paths and only one is the solution (we have no idea which one). We have a very simple O(1) algorithm that tells us if a path is a solution - we try the path and it either works or fails. So, no matter how I sort the set of paths, I should statistically expect to find the solution with equal probability on each attempt. From that, it follows that I will check half the paths on average before finding the solution. I use this approach early on in Big O because I think people handle sets or collections better than abstract solutions. (As for the "what if there is more than one solution" argument - that is impossible because there are no loops.) 209.149.113.4 (talk) 16:15, 29 September 2016 (UTC)[reply]
I suspect that the exact method used to create the "random" maze may be quite important in figuring out the optimal solution method. I always favor a BFS approach that starts at both ends, rather than just one. While not guaranteed to be quicker, imagine a case where an infinite 2D maze is just an open grid with no walls (but we don't know this ahead of time, so can't use it). If our starting and ending points are r=10 spaces away, then building a circle from one point until it hits the other would fill in about π(10)² ≈ 314 spaces, while two circles, each with a radius of 5, would fill in 2π(5)² ≈ 157. That works out to half as many spaces. When you go to a 3D maze, that goes to one quarter as many spaces. When you go to higher dimensions, it becomes even more efficient (doubling the difference with each added dimension). Also note that the puzzle may not be a maze. Imagine a chess board in the current position and initial position, where your goal is to find the steps used to get there. Starting from both ends could be much quicker there, assuming we just use a brute-force BFS. (Of course, if you have more than a few moves, a pruning method really is needed here.) StuRat (talk) 18:27, 28 September 2016 (UTC)[reply]
You can use an A* algorithm, or some other heuristically guided method. Plain depth first is only guaranteed to terminate if the maze is finite and if there are no loops (or you do bookkeeping to handle them). BFS will always find the shortest route, so it naturally handles loops and infinite mazes. --Stephan Schulz (talk) 19:01, 28 September 2016 (UTC)[reply]
The problem with using a heuristic approach to maze-solving is that it's not obvious which moves are better or worse, until the maze has been solved. If you move closer to the target, that may be a worse move, because you are moving towards a dead end. StuRat (talk) 20:11, 28 September 2016 (UTC)[reply]
If you knew which moves were better, it wouldn't be a search, heuristic or otherwise. The heuristic in A* is not arbitrary: see admissible heuristic. For a grid maze, the Manhattan distance to the goal is an admissible heuristic. A simple breadth-first search is equivalent to A* with a distance heuristic of 0, which is admissible for any search but inferior to Manhattan distance in this case. -- BenRG (talk) 21:29, 29 September 2016 (UTC)[reply]
It's not obvious that being physically closer, whether in Euclidean distance or taxicab distance, is an indication that you are on the correct path. Consider a maze which is a giant spiral, plus a few dead-end offshoots here and there, with the center of the spiral being the start and the outside being the finish. In such a maze you would need to move away from the target almost as often as you move towards it. StuRat (talk) 02:01, 1 October 2016 (UTC)[reply]
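For concreteness, here is a minimal A* sketch using the Manhattan-distance heuristic BenRG describes (an editorial illustration; the grid, cell set and function names are invented for the example). Because the heuristic is admissible, A* still returns a shortest path even in StuRat's spiral maze; the heuristic only changes how many cells get expanded, never the answer:

import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(passable, start, goal):
    # passable: set of open (row, col) cells. Returns shortest path length.
    # With a heuristic of 0, this reduces to breadth-first search.
    frontier = [(manhattan(start, goal), 0, start)]   # (f, g, cell)
    best_g = {start: 0}
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if nxt in passable and g + 1 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g + 1
                heapq.heappush(frontier,
                               (g + 1 + manhattan(nxt, goal), g + 1, nxt))
    return None

# Toy L-shaped corridor from (0, 0) to (1, 2): shortest path has length 3.
print(astar({(0, 0), (0, 1), (0, 2), (1, 2)}, (0, 0), (1, 2)))   # 3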

Help with MIDI files in WP

When I click "Play" on a MIDI file in WP such as this one: Play, the file doesn't play. Instead I'm asked to save it to my local hard disk. Is this inevitable? Does this happen to everyone? Or is there something I'm doing wrong? Is there something I can do to have it play in the browser as I'm reading the article? Thanks. Basemetal 19:30, 28 September 2016 (UTC)[reply]

See WP:Media help (MIDI) for details. Tevildo (talk) 19:53, 28 September 2016 (UTC)[reply]
Thanks. That help says nothing about Chrome. Does anyone know what to do about this issue in Chrome? Basemetal 20:02, 28 September 2016 (UTC)[reply]
You'll need a browser extension - however, Google isn't proving helpful about obtaining one. Does anyone else have a suggestion? Tevildo (talk) 22:04, 29 September 2016 (UTC)[reply]
Exactly. Thanks Tevildo. Are there really no Chrome users here who've solved the problem for themselves? Maybe I'll check at the Village Pump. But don't hesitate to reply if you've suddenly got a brainstorm, y'all. Basemetal 15:31, 1 October 2016 (UTC)[reply]

September 29

FFT code

Where can I get code for FFT?--86.187.172.8 (talk) 01:06, 29 September 2016 (UTC)[reply]

If you want ready-to-use code, then depending on the programming language an FFT may already be implemented; for instance, in Python there is numpy.fft. TigraanClick here to contact me 07:04, 29 September 2016 (UTC)[reply]
FFTW is a good open-source FFT library (if that's what you want). -- BenRG (talk) 08:43, 29 September 2016 (UTC)[reply]
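To illustrate the numpy.fft suggestion, a minimal sketch (the signal, sample rate and variable names are made up for the example): recovering the frequency of a pure 5 Hz tone sampled at 100 Hz.

import numpy as np

t = np.arange(0, 1, 0.01)               # 100 samples over one second
x = np.sin(2 * np.pi * 5 * t)           # a pure 5 Hz sine wave
spectrum = np.fft.fft(x)                # complex spectrum of the signal
freqs = np.fft.fftfreq(len(x), d=0.01)  # maps each bin to a frequency in Hz
peak = np.argmax(np.abs(spectrum[:len(x) // 2]))  # positive-frequency bins only
print(freqs[peak])                      # prints 5.0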

Simple receive SMTP

On the Windows command line, how can I monitor SMTP 127.0.0.1 port 25 for any incoming email sent to it? It doesn't have to do anything fancy except receive the email stream and dump it to output. I just want to test whether the emails are being sent to it correctly without setting up a full-scale email server. The simpler the better. Thanks — Preceding unsigned comment added by 125.141.200.21 (talk) 12:43, 29 September 2016 (UTC)[reply]

Python's standard library contains a simple smtpd implementation. Doug Hellmann shows an ultra-simple example of using that, with a tiny program here. If you changed the port, and added the line print 'Message  :', data (after the other prints) then you'd be done. -- Finlay McWalter··–·Talk 13:00, 29 September 2016 (UTC)[reply]
Reading that whole Hellmann article, he has exactly what you want in the DebuggingServer section of his article. -- Finlay McWalter··–·Talk 13:19, 29 September 2016 (UTC)[reply]
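For reference, a minimal sketch along the lines of that DebuggingServer section (Python 2 syntax, matching the era of Hellmann's article; note that binding port 25 may require administrator rights, and anything already listening on that port must be stopped first):

import smtpd
import asyncore

# DebuggingServer accepts SMTP connections and prints each received
# message to standard output instead of delivering it anywhere.
server = smtpd.DebuggingServer(('127.0.0.1', 25), None)
asyncore.loop()

The standard library also exposes the same thing directly from the command line: python -m smtpd -n -c DebuggingServer 127.0.0.1:25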
You need a packet sniffer like wireshark. It monitors and logs network traffic and has many options to filter data based on your needs, such as logging only incoming traffic on port 25. It is nontrivial to use but will give you what you need. manya (talk) 05:55, 30 September 2016 (UTC)[reply]
Why do you need a packet sniffer? A packet sniffer may be useful for debugging a networking issue, but if you want to test an SMTP session you need something to connect to over SMTP. --47.138.165.200 (talk) 19:33, 30 September 2016 (UTC)[reply]

October 1

Part Number for Drive Tray used by HPE D2700 Disk Enclosure

I have bought a bare HPE D2700 Disk Enclosure (AJ941A), i.e. with only 2 SAS 6Gb Controllers, 2 PSUs and 25 Drive Blanks.
I intend to populate the D2700 with SSDs of my choice, but I first need to know the Part Number for the compatible Drive Tray, as the Drive Blanks cannot be used to install Drives. Vickreman.Chettiar 10:50, 1 October 2016 (UTC)[reply]

This forum thread may be useful. Tevildo (talk) 19:34, 2 October 2016 (UTC)[reply]

Is there a keystroke to return to 100% magnification?

...using Windows 7, and typically Google Chrome. I am aware that CTRL 0 returns to "default zoom level", but that doesn't help. I want to be able to quickly go between 100% and 200% zoom levels, but CTRL + and CTRL - go in small steps: 100%, 110%, 125%, 150%, 175%, 200%. I currently have 200% set as the default, so that means I need many keystrokes to get back to 100%. I could set 100% as the default, but then I would have many steps to get to 200%. Ideally, 200% would remain the default, so CTRL 0 would still get me directly there, and I could get to 100% in one other keystroke.

Alternatively, if there's a way I can eliminate the intermediate steps, that would make me happy, too. BTW, if you're curious, my eyesight isn't so good, so I would prefer 200% all the time, except that an amazingly high proportion of web sites don't seem to handle that setting properly. Thanks, StuRat (talk) 00:38, 1 October 2016 (UTC)[reply]

See Windows key. ⊞ Win++ will zoom to 200%, ⊞ Win+- will zoom out if the Magnifier Utility is running, and ⊞ Win+Esc will exit the utility. Tevildo (talk) 08:15, 1 October 2016 (UTC)[reply]
Thanks, but I prefer to just use the zoom in the browser, to avoid the overhead from the magnifier utility, which also lacks the option to magnify the entire screen under Windows 7, 32-bit edition. StuRat (talk) 16:20, 1 October 2016 (UTC)[reply]
OK. There is a Chrome extension called "Shortkeys" that appears to do what you want, but I've not used it myself and this isn't an endorsement. You can also zoom in and out with Ctrl-MouseWheel, which may be quicker than the keyboard, although it may not be obvious when you're at 100% rather than 110%. Tevildo (talk) 21:05, 1 October 2016 (UTC)[reply]
Thanks. CTRL mouse wheel works, and it lists the magnification level at all steps except the default zoom level, probably because that makes the "RESET TO DEFAULT" button (listed with the magnification) moot. StuRat (talk) 13:55, 2 October 2016 (UTC)[reply]
Incidentally, you might want to talk to your optician about getting a pair of bifocals, or a separate pair of reading glasses. I wouldn't be able to operate without them. Tevildo (talk) 08:22, 1 October 2016 (UTC)[reply]
You haven't read all my recent posts, I see. I have a skin condition which precludes me from wearing glasses, jewelry, watches, or even tight clothes. Therefore, I wear a single contact lens, so I have near vision in one eye and far vision in the other. (There supposedly are bifocal contact lenses, but I am skeptical about how well they might work, so haven't tried them yet.) StuRat (talk) 16:17, 1 October 2016 (UTC) [reply]

Make building from scratch every time

I'm following the instructions here[1] to build Unity, specifically using the "dpkg-buildpackage -rfakeroot -uc -b" command given. The first time building it took 25 minutes, which is to be expected, since it's a big program. No biggie. But my problem is that every subsequent build is also taking 25 minutes, meaning it's starting from scratch for some reason. I'm pretty sure it's building from scratch because I see messages like "[ 44%] Building CXX object launcher/CMakeFiles/switcher.dir/StandaloneSwitcher.cpp.o" when I haven't modified any cpp files at all, so no object files should need to be recompiled.

How do I make it so that it builds incrementally, i.e. only compiles and links the portion of the program that has changed since the last build?

AFAIK make does incremental builds by default, and no sane person would turn off incremental builds without a very very pressing reason, so I'm at a loss as to why this is happening in the first place. Pizza Margherita (talk) 03:32, 1 October 2016 (UTC)[reply]

That's the default behaviour. To only do an incremental build, add the '-uc' option. From the man page:
 -nc    Do not clean the source tree (implies -b if nothing else has been selected among -F, -g, -G, -B, -A or -S).
LongHairedFop (talk) 14:03, 1 October 2016 (UTC)[reply]
Thanks. Is it '-uc' or '-nc'? You said '-uc' but the man page says '-nc'. Pizza Margherita (talk) 16:16, 1 October 2016 (UTC)[reply]
With the '-nc' option it's not building from scratch anymore, which is great. But the bad news is that it's not building at all now. After a successful build with "dpkg-buildpackage -rfakeroot -nc -uc", I made some changes, and ran "dpkg-buildpackage -rfakeroot -nc -uc", but here's the result:
time dpkg-buildpackage -rfakeroot -nc -uc
dpkg-buildpackage: source package unity
dpkg-buildpackage: source version 7.4.0+16.04.20160906-0ubuntu1
dpkg-buildpackage: source distribution xenial
dpkg-buildpackage: source changed by Marco Trevisan (Treviño)
dpkg-buildpackage: host architecture amd64
 dpkg-source --before-build unity-7.4.0+16.04.20160906
 debian/rules build
dh build --with translations,quilt,python2,python3,migrations --parallel
 fakeroot debian/rules binary
dh binary --with translations,quilt,python2,python3,migrations --parallel
 dpkg-genchanges  >../unity_7.4.0+16.04.20160906-0ubuntu1_amd64.changes
dpkg-genchanges: including full source code in upload
 dpkg-source --after-build unity-7.4.0+16.04.20160906
dpkg-buildpackage: full upload (original source is included)

real	0m1.275s
user	0m1.096s
sys	0m0.068s
Basically the build system thinks nothing has changed, and that there's nothing to build. Pizza Margherita (talk) 21:32, 1 October 2016 (UTC)[reply]
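One hedged guess at what is happening (untested, and the file names below are debhelper-version dependent): dh records which build steps have already completed in per-package debhelper log files, so after one full successful build a -nc run considers every step done. Deleting those logs, while leaving CMake's object files in place, should make dh re-run the build and install steps as an incremental compile:

 rm -f debian/*.debhelper.log
 dpkg-buildpackage -rfakeroot -nc -uc -b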

php without script file

So I have a php script with:

<?php
echo hash('md5', $argv[1]);
?>

that gets called from the command line like so:

php.exe -f md5.php textgoeshere

which produces:

96ad4fad21a0d24c732a00cf3450e2ae

as the output. How can I do this with php directly on the command line bypassing the need for calling a separate .php script containing the function? Cohefuku (talk) 14:34, 1 October 2016 (UTC)[reply]

php -r "echo hash('md5', $argv[1]);" textgoeshere. -- zzuuzz (talk) 14:54, 1 October 2016 (UTC)[reply]
Wow, thanks zzuuzz! You're now officially a contributor to my new bot. I hope you'll be looking forward to the little surprise I have planned for the reference desk in the next few days... hehehe! Cohefuku (talk) 15:43, 1 October 2016 (UTC)[reply]
(edit conflict) This should work (works on Linux, I think it should be okay on Windows too):
   echo "<?php echo hash ('md5', 'textgoeshere') ?>" | php
-- Finlay McWalter··–·Talk 14:57, 1 October 2016 (UTC)[reply]
You can also do it a bit more simply on Linux systems like this
echo -n textgoeshere | md5sum
This will work on Windows too if md5sum is installed (although your version of course requires php to be installed). CodeTalker (talk) 21:01, 1 October 2016 (UTC)[reply]

Has any criminal been caught through machine learning?

I know that ML can be useful for detecting fraud or tax evaders. It could also be a good tool for concentrating police patrols where more crimes could happen.

However, has any concrete case been solved by machine learning? --Hofhof (talk) 19:01, 1 October 2016 (UTC)[reply]

What an incredibly broad question! To help formulate your question better, you might want to look up some useful jargon and start thinking about the way our legal framework does things.
Cases are never "solved" - they are decided, or remedied, or settled. And evidence is never the solution, either!
Admissible evidence means some document, testimony, or tangible item that a court allows attorneys to present during a case.
Dispositional evidence [2] means that the evidence was in itself enough to set the result of the case.
Actionable intelligence means some fact or information - legal or otherwise - that lets a law enforcement agency make forward progress, possibly leading to a warrant or an arrest.
So, a better way to phrase your question might be:
  • "Has a machine learning algorithm ever yielded admissible evidence in a court case?" Surely yes, thousands of times; but it might take a while to find great examples. If we broadly consider that much of modern forensic evidence, particulary biometric data like fingerprints, results from computer systems that may incorporate machine learning methods, then it will be easy to find examples.
  • "Has a machine learning algorithm ever yielded dispositional evidence in a court case?" Surely yes, but it might take many hours reading boring court documents to track down good examples.
  • "Has a machine learning algorithm ever yielded actionable evidence leading to an arrest?" Surely yes, but the records might not be public unless a court releases them. Once again, it may take hours to track down great examples.
Many people speculate, for example, that Palantir Technologies employs machine learning engineers to create products that are used by the FBI to investigate white-collar crime and human-trafficking. Some speculate that this technology was also used to catch Usama bin Laden, a rumor often repeated by major news outlets like Forbes. I was not able to find any data that convinces me beyond a reasonable doubt - at least, not during a cursory search of the websites of the Department of Justice and the FBI. (I have looked before, and never found anything conclusive, either, but as they say, the absence of evidence is not the evidence of absence...)
Here is a case study from the FBI's website: Social Network Analysis: A Systematic Approach for Investigating. "With the support of robust technology, SNA becomes reliable across time, data, analysts, and networks and quickly produces actionable results inside any operational law enforcement environment." (The jargon in that sentence basically means that the FBI believes it can solve crimes using such algorithmic analysis). In this particular case study, the algorithmic information did not lead to an arrest or conviction, but researchers believe it could have led to arrest and conviction if administrative and legal policies were different.
Nimur (talk) 01:37, 2 October 2016 (UTC)[reply]
This isn't a field I understand that well, but it probably depends what you mean by machine learning. The controversial software TrueAllele uses Markov chain Monte Carlo and the Matlab Statistics and Machine Learning Toolbox [3] [4] [5] [6]. One of its competitors, STRmix, is programmed in Java and doesn't use Matlab, but my impression is its design is similar [7] [8] [9]. Again, I don't understand these that well, but I don't know if you could say they were built using machine learning, and I'm fairly sure they don't learn between runs, though potentially you could say they use machine learning within a run. (The first external link for each software is the best explanation I found for how each works, from a quick look.) Both, BTW, have been controversially used in court cases in various jurisdictions, including the US. Nil Einne (talk) 06:47, 2 October 2016 (UTC)[reply]
One may also want to be careful about what one means by ML leading to evidence. In many cases, it may be that the ML algorithms are used to uncover something that is subsequently reviewed by humans and the human interpretation rather than the computer one is what is presented in court. Circumstances where ML results are directly presented as evidence are probably quite a bit rarer than circumstances where they assist the development of a case in an intermediate fashion. For example, I don't know exactly how modern fingerprint database searches work, but I could certainly believe ML plays a role. However, when a fingerprint match is presented to the court, it is almost certainly based on the subsequent review and interpretation of a human expert rather than simply reporting what the computer generated. In part this is because of the role of cross-examination. Computers are complex, and some ML methods are quite opaque, which can make it easier for defense lawyers to cast doubt on their accuracy. So the prosecution is likely to rely heavily on human judgment when actually presenting a case before a judge and jury even if the initial observations leading to critical conclusions were first revealed by a machine. Dragons flight (talk) 07:15, 2 October 2016 (UTC)[reply]

Steve Jobs, from literature dropout to tech entrepreneur

How could someone like Steve Jobs, with an educational background in literature (and not even a degree), become that successful in a technical field? --Llaanngg (talk) 19:17, 1 October 2016 (UTC)[reply]

Some would say he had a stronger background in electronics. There's a lot of people who have been successful at things which have nothing to do with their formal (institutional) education. -- zzuuzz (talk) 21:13, 1 October 2016 (UTC)[reply]
The technical stuff was done by other employees (originally Steve Wozniak). Jobs was just a manager cum charismatic guru. -- BenRG (talk) 21:28, 1 October 2016 (UTC)[reply]
That leads us to the question of what he was doing there, if the real work was done by Wozniak. — Preceding unsigned comment added by Llaanngg (talkcontribs) 21:53, 1 October 2016 (UTC)[reply]
He handled funding and manufacturing and marketing and sales. There's a lot to do in a tech startup other than laying out circuit boards and writing software. -- BenRG (talk) 22:07, 1 October 2016 (UTC)[reply]
For what it's worth, here's a source: a recent interview in which Woz said "Steve Jobs played no role at all in any of my designs" and "he did not know technology". I don't think this is a revelation to anyone in the industry: everyone knew that Woz was the engineer and Jobs was the businessman. -- BenRG (talk) 22:38, 1 October 2016 (UTC)[reply]
I think you are correct about people "in the industry", but anecdotally a lot of "average people" think Jobs invented the personal computer. Shows you how much of an impact image-crafting can have. This might be where the original poster is coming from. --47.138.165.200 (talk) 01:06, 2 October 2016 (UTC)[reply]
The original question asked how he became successful - and that question can be addressed entirely without reference to his technical skill. If you want to know about Steve's life, his authorized biography, Steve Jobs, by Walter Isaacson, is worth reading.
Nimur (talk) 01:03, 2 October 2016 (UTC)[reply]
Also note that Bill Gates similarly is not so much a tech guy as a businessman, but recognized that the computer mouse (invented by others, notably Douglas Engelbart's lab, and refined at Xerox PARC) would be world-changing, and made his software and hardware mouse-compatible starting in 1983-1984. StuRat (talk) 14:02, 2 October 2016 (UTC)[reply]
Bill Gates was more programmer than businessman in the early days. See Bill Gates#Early life. -- BenRG (talk) 03:10, 3 October 2016 (UTC)[reply]
(EC) Whoa, back the truck up; by all accounts Bill Gates was a very skillful programmer. You can't put Bill and Jobs in the same basket. Bill became one of the most successful businessmen in history, but he started as "very much a tech guy"; Jobs didn't even start as a tech guy. Vespine (talk) 03:11, 3 October 2016 (UTC)[reply]

October 3

Manipulating images

Hi all--I need something easy to help me manipulate some images. Specifically, I have eight gifs I need to combine into a single jpg. My PC is nothing special and I have no special graphic software, and I am hoping to do this with the stuff I have which includes Picture Manager and standard Windows stuff. I have Image Processor but I have no idea what it is or does. Your help is much appreciated. Drmies (talk) 14:47, 3 October 2016 (UTC)[reply]

Are the GIFs animated? (assuming they're not animated) If you mean to put them together in an array in a single larger image (e.g. a row like a filmstrip), you can do that with ImageMagick's montage function. -- Finlay McWalter··–·Talk 14:54, 3 October 2016 (UTC)[reply]
  • Thanks, Finlay McWalter. (No, nothing's animated.) I'm looking at their website. There are 13 versions for Windows, and I don't know what the difference between static and dynamic means. Don't overestimate me: I don't even know what the difference is between Win64 and Win32. Wait--I have Windows 7 Enterprise, 32-bit, so that cuts out a few options. So--dynamic or static? 16 bits per pixel or 8 bits per pixel? Drmies (talk) 15:06, 3 October 2016 (UTC)[reply]
ImageMagick-7.0.3-2-Q16-x86-dll.exe -- Finlay McWalter··–·Talk 15:10, 3 October 2016 (UTC)[reply]
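Once it's installed, something along these lines from the Windows command prompt should produce the combined image (the file names here are placeholders for your eight GIFs; -tile 4x2 arranges them four across and two down, -geometry +2+2 adds a small gap between frames, and depending on the ImageMagick 7 install options you may need to prefix the command with magick):

 montage img1.gif img2.gif img3.gif img4.gif img5.gif img6.gif img7.gif img8.gif -tile 4x2 -geometry +2+2 combined.jpg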
GNU Image Manipulation Program is very powerful, but will take a little more effort to learn. It can work with layered images and combine them using masks, transparency, or user-specified methods. Nimur (talk) 15:02, 3 October 2016 (UTC)[reply]
  • Ha, I'm looking for a low learning curve. I remember vaguely what layers are, from way back when, when I had some hijacked copy of PhotoShop or whatever it was called--but I know I don't need to mask, transpare, dither, wax, or wox. Drmies (talk) 15:06, 3 October 2016 (UTC)[reply]
How do you want the images combined? Do you want to make side-by-side, 2x4, 4x2, or some such composite image from several images, like a cartoon strip from individual cartoon frames? Or something more complex, with elements of one image cut and inserted into other images? 91.155.195.247 (talk) 15:41, 3 October 2016 (UTC)[reply]