Everything posted by JB0

  1. Inserting randomness is actually a highly valid approach in many circumstances. Modeling the real world being one of the more obvious. It's why microprocessors HAVE random number generators.

Yup. Power and space are major obstacles. But when you look at what a supercomputer twenty years ago could do and what a laptop now can do... you start wondering how fast those barriers will come down.

How would that defeat the purpose? If the problem is "generating human-like behavior" then your algorithms SHOULD do exactly that. It can't be picking the "wrong" solution if it does a dumb thing that a real human with a similar background would have done.

Consider a weather forecast program. There's a lot of data that CAN'T be input into such a program because it can't be measured with the limited sensing equipment and computer capacity available. And some of it just looks like random data with no visible cause. Some of it will have known patterns, but some of it will appear as white noise to us, because we simply don't understand the system. I'm not a meteorologist, so this is admittedly speculation, but I would assume there's actually a good bit of randomization in the computerized weather models. And as paradoxical as this sounds, this can actually IMPROVE accuracy.

Arguably, the human brain IS a computer running algorithms. They're just very complex ones with large quantities of input, some of it highly randomized. And, well, it's clearly NOT a classical binary von Neumann machine.

There's a lot of iffiness in the Deep Blue VS Kasparov rematch. And you grossly oversimplify how much reprogramming was done between rounds. That's also over a decade out of date. The best chess computers still have great difficulty beating the best human players. My quick and dirty research shows that most such matches end in draws. But they do it without being reprogrammed between rounds, and without forcing the opponent to come in effectively blind. I do take your point that computers play a very different style of chess than humans. I maintain that it could be coded to play more like a human if the problem was understood and the computer existed that could run the math in any reasonable amount of time.

Well, my point was that there IS a deterministic process at work in the brain. It's just not one that works very well by computing standards. Ultimately, it comes down to physics. And physics is math. If the brain was understood, it could be modeled in a computer (as opposed to the more abstract neural networks that exist currently, which only model neuron connections and not the whole assemblage). If it was understood well enough, it could be modeled accurately. And if that accurate model was run... you'd have an artificial brain.

And legacy code can only be rewritten if you have the source code and the hardware necessary to read and write the media it's stored upon. This is a bigger problem than you might think. And you still need someone that can actually make heads or tails of that code. Obviously, we have none of that for the brain.

Evolution isn't as rapid or all-powerful as you make it out. It's stuck choosing between different sets of humans; it can't just make an arbitrary "man 2.0" model and implant that on a new person. It's also more concerned with the short term. Who's making the most babies? Which babies are surviving long enough to have their own babies? That's what evolution cares about. Besides, modern medicine has largely short-circuited evolution. Survival of the fittest no longer applies in a world of corrective surgeries and antibiotics. If someone has a defective heart, we fix it. If we can't fix it, we replace it. They live and pass those genes on.

Anyways, computers do math. Given enough computer, you can solve any possible math. And like I said, the brain operates according to the laws of physics, and physics is math. Once you know the initial conditions and system constraints, you can program those in and simulate it accurately. And my point is that if you know how it works, you can simulate it. Everything in the universe is math. Some things are just more math than others.

Actually, you're comparing apples and oranges. Quantum computing is a whole new hardware field. Neural networking is a SOFTWARE field. NNs run on deterministic binary math computers. Yes, some even run on ye olde IBM PC clones. Not incredibly advanced networks in comparison to organic life, but... it's PROVEN that modern computers CAN simulate small collections of neurons (rough sketch at the end of this post). It's even possible to simulate small collections of biologically-accurate neurons, to the extent that we know how they work.

I'd say the issue is one of raw power, and understanding of the problem. More so the latter. But regarding the former... as a rough apples-to-oranges comparison, there are over a hundred billion neurons in an adult human brain. An Intel Core 2 Quad has some 500 million transistors, each of which is capable of doing far less than a neuron. Biology can be simulated once it is understood. Whether we'll still be running binary transistors once we understand it well enough to simulate is a less clear issue.

... Though some claim they can do it now. http://bluebrain.epfl.ch/

"With the present simulation facility, the technical feasibility to model a piece of neural tissue has been demonstrated."

Also: "In the cortex, neurons are organized into basic functional units, cylindrical volumes 0.5 mm wide by 2 mm high, each containing about 10,000 neurons that are connected in an intricate but consistent way. These units operate much like microcircuits in a computer. This microcircuit, known as the neocortical column (NCC), is repeated millions of times across the cortex. ... This structure lends itself to a systematic modeling approach."

And the reality check is here: "Our Blue Gene is only just enough to launch this project. It is enough to simulate about 50'000 fully complex neurons close to real-time. Much more power will be needed to go beyond this. We can also simulate about 100 million simple neurons with the current power. In short, the computing power and not the neurophysiological data is the limiting factor."

On one of the fastest supercomputers in the world, they can simulate 50,000 neurons out of 100,000,000,000.

And here's the largely unrelated, but incredibly fascinating part... "Will consciousness emerge? We really do not know. If consciousness arises because of some critical mass of interactions, then it may be possible. But we really do not understand what consciousness actually is, so it is difficult to say."

If they get enough virtual neurons firing, and the computer becomes conscious, it'll be just incredibly amazing (though obviously they need a major upgrade to their Blue Gene to even try). Either way, it's just incredible.

That actually wasn't meant to be a RUDE "wrong on all counts." And may the joy of Hanukwanzmasgiving be visited upon you as well.
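Since I brought up simulating neurons: here's roughly what that means on plain deterministic hardware. This is a minimal Python sketch of a "leaky integrate-and-fire" neuron, which is emphatically NOT the Blue Brain model; every constant in it is an illustrative guess, and the jittery input current is the "inserted randomness" from the top of the post.

    import random

    rng = random.Random(42)

    dt = 0.1           # timestep, milliseconds
    tau = 10.0         # membrane time constant, ms
    v_rest = -70.0     # resting potential, mV
    v_thresh = -55.0   # spike threshold, mV
    v_reset = -75.0    # post-spike reset, mV

    v = v_rest
    spikes = 0
    for _ in range(10000):               # one second of simulated time
        # steady drive plus random jitter -- the "inserted randomness"
        current = 16.0 + rng.gauss(0.0, 5.0)
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:                # cross the threshold: fire and reset
            spikes += 1
            v = v_reset

    print(spikes, "spikes in one simulated second")

Wire a few thousand of these together with connections between them and you've got the baby brother of what Blue Brain is doing, minus all the biological accuracy that makes their version hard.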
  2. While I agree with the sentiment (and extend it to cross-console games that never see a general-purpose computer), I have to dispute several of your points.

Since when is "because that's how it's always been done" a good reason to keep doing it? The easy way is rarely the BEST way. I HATE the post-Doom quick-save attitude. It inspired a LOT of bad level design and cheap deaths because "you can just reload anyways" and ultimately made death meaningless. If you want TRADITIONAL, I can point to a wide swath of computer games from over a decade of PC gaming that only allowed between-level saves. But I always thought the best thing about PC gaming was that it wasn't BOUND by tradition. Used to be that most developers wouldn't let copying someone else's bullet list get in the way of making the game THEY wanted.

See, I think this, more than anything, is what's HURTING PC gaming. That desire to make games that bog down the latest and greatest hardware, then immediately introducing new hardware designed to run those latest and greatest games. In the old days, when PC gaming was strong, games were made to fit the hardware. Hardware wasn't made to fit the games. If your game couldn't run on the current hardware, you scaled the game back. You didn't tell everyone to buy a new video card or three. There should not BE a use for three graphics accelerators in one machine.

I guess what I'm saying is... I vastly prefer the PC game landscape as it existed in the 80s and 90s. You know, when it was healthy. I can find more of value in a single bajillion shareware games disk from the early 90s than an entire software aisle today. Especially the 80s. I think the final dominance of the IBM clone running Windows has been bad for PCs as a whole. But that's really another topic entirely.

True, but... keyboard+mouse is also probably the single worst standard controller ever. I will stand by this stance until the day I die (I also don't even own a mouse. Trackball 4 lyfe!).

Aaaactually... The first model of Sega Genesis is slower than all later revisions. Sega used a knockoff of the 68000 for the first model that performed far worse than the official Motorola parts. And the TurboGrafx had a steady stream of RAM upgrades for the CD unit.

But really, a stable target is a GOOD thing. Aside from letting developers know what they're working with and make the most of it... Can you imagine trying to explain why SOME Super Nintendos or PlayStations can run later games and others can't? When you're ready for a hardware upgrade, you release a whole new system. Upgrade the entire thing in one shot. And console gamers buy a new system every few years, often for the price of a single PC graphics card.

What's your point? If it's backwards-compatibility, I suggest you dig a few games from 2005 out and try them in your new machine. I've found that a lot of them choke and die... or just glitch in REALLLLLLLLLLY interesting ways.

This console generation is actually lasting unusually long, likely due in large part to MS and Sony both over-reaching with their initial hardware designs (as seen in their current systems' unusually high launch prices). Nintendo, though... I can't explain why there's no refresh there. I guess they just don't have to care until the Wii stops laying golden eggs and starts laying silver ones.

And it's a situation the PC games market has brought upon itself. Spiraling hardware costs and incredibly short life cycles, combined with a lack of diversity and generally shoddy development* have, if not killed outright, seriously crippled a once-vital market.

*I have great technical respect for Doom 3, despite it being pretty un-fun to me. Largely because there was actual effort spent optimizing the game, and as a result it looked pretty good on several generations of hardware instead of looking best on the latest and greatest and varying levels of crappy on anything older.

So yeah. I loved PC gaming once, and I hate what it has become all the more for it.
  3. It's a lovely collection of large words, but... it's also a complete load of crap?

     1. You can write fairly complex algorithms with a wide variety of input variables and output solutions. And you could always add a random input to churn the waters a bit (toy sketch at the end of this post). Or throw in a few bugs, because no code much more complex than PRINT "HELLO WORLD" is perfect. And making routines that choose blatantly wrong solutions to a complex situation is actually rather easy, much to the chagrin of video game designers the world over. Just because it's computerized doesn't mean it's inherently BETTER at making choices.

     2. There's also a strong possibility that many of the dumb, irrational choices we make are driven by ancient caveman (or even PRE-caveman) thought patterns that simply aren't applicable to the modern world, but are still a large part of how the brain expects things to work. Effectively analogous to glitchy legacy code in a computer environment. It's old, outdated, and kludgy, but it still works... most of the time.

     3. It wouldn't be at all like a dog or cat, because dogs and cats have organic brains that process information in similar manners to our own, albeit with different hard-wired priorities and much less horsepower.

     4. Neural networks are designed to create organic-style "thinking" from computers, and they're rather good at it. They're also INCREDIBLY inefficient, so they take LOTS of power and space. But those are cheap these days, and getting cheaper almost by the minute.

     5. GitS interfaced organic brains to technology. It didn't replace brains WITH technology. There are certainly problems with that (and Masamune Shirow actually calls some of them out himself in author's notes), but it completely sidesteps the issues with simulating the human brain (which isn't a system we understand well enough to simulate anyways). It also means going "total cyborg" isn't a cure for Alzheimer's or Parkinson's in the GitS world.

So... ummm... you're pretty much wrong on all counts? Bring on the robo-bodies!
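Since point 1 is easier to show than tell, here's a toy Python sketch of the idea: score your options, stir a random input into the scores, and the routine will occasionally commit to a blatantly wrong answer, same as a person would. The action names and weights are all invented for illustration.

    import random

    rng = random.Random(7)

    def choose_action(scores, jitter=0.3):
        """Pick the best-LOOKING action after adding noise to each score."""
        noisy = {act: s + rng.gauss(0.0, jitter) for act, s in scores.items()}
        return max(noisy, key=noisy.get)

    scores = {"take cover": 1.0, "flank left": 0.9, "charge screaming": 0.5}
    picks = [choose_action(scores) for _ in range(1000)]
    for act in scores:
        print(act, picks.count(act))

Mostly it takes cover, sometimes it flanks, and every so often it charges screaming. Crank the jitter up and it gets dumber; turn it down and it gets robotic. That's the whole trick.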
  4. A little late, but... Apollo 17 launched on December 7, 1972. The lander module Challenger touched down on the moon the 11th, left and returned to the orbiting America on the 14th, and America splashed down on the 19th. Eugene Cernan became the last man to walk on the moon. He, Harrison Schmitt, and command module pilot Ronald Evans were the last men to ever leave Earth orbit. Schmitt remains the only scientist to have ever left.

In 4 years we put twelve men on the moon. We've not returned once since then. It's been thirty-eight years since man last left the Earth. No, I don't consider mucking about in low Earth orbit to be "leaving" the Earth. You're still in the Earth's atmosphere. The International Space Station is the most common destination these days, and that's well inside the thermosphere. The soon-to-be-retired space shuttle can only BARELY reach the exosphere.

To this day the moon remains the only frontier we've ever reached, then packed our bags and went home. Not space, as calling the moon "space" is much the same as calling some of the dust in your floorboard a car. We've never even scratched the surface of THAT frontier.

But on a brighter note, as of December 13, Voyager 1 is no longer seeing an outward motion of solar wind. It is on the threshold of interstellar space, and expected to cross that border in the next 4 years.
  5. Hell yeah. I wanna see some decent replacement parts come out before my OEM parts start dying!
  6. You can't tell me with a straight face that you don't want to smear poop all over a nice big Robotech logo.
  7. Hey, my absolute world domination wish is BASICALLY the same thing!
  8. Ewww, ear buds. I guarantee 10-buck ear buds DON'T play music just as well as 800-dollar reference gear. At least you aren't using the iPod's pack-in buds. Could always be worse. Or, of course, I could just envy your less discerning ears. It certainly makes life less expensive.

As far as the HDMI goes... that's a pretty variable thing, and you could write a book about it. Depending on how long the run is and how much data you're pushing (a 720p signal can work through a worse cable than a 1080p one because it's a far lower bitrate; napkin math at the end of this post), a crappy cable can be just as good (due to digital showing no damage until bits degrade beyond legibility) or horrible (due to the bits degrading beyond legibility). Of course, HDMI cables are also one of the most absurdly marked-up products on store shelves today, and a lot of places get 30 and 40 bucks for the same damn 5-dollar cable. Soooo... get the cheap cable anyways. If it works, and there's no sparklies or dropped frames or desynced audio, then that rocks!

In an analog environment, the cheapest cables WILL show degradation. Almost invariably. If you were pushing HD video signals over component, you'd probably notice a huge difference between the five-buck cables and the 20-buck cables. There's a lot less markup on the classic RCA-connector cables in general, so it's much more a case of "you get what you pay for," barring a few notable exceptions. But with a digital feed that's not hitting the limits of the cable... what model the TV is and how well it's calibrated means a lot more.
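The napkin math on that bitrate claim, counting only raw pixel data (a real HDMI link also carries blanking intervals and encoding overhead, so actual line rates run higher than these numbers):

    def pixel_rate(width, height, fps, bits_per_pixel=24):
        """Raw pixel data per second, in bits."""
        return width * height * fps * bits_per_pixel

    for name, (w, h) in {"720p60": (1280, 720), "1080p60": (1920, 1080)}.items():
        print(name, round(pixel_rate(w, h, 60) / 1e9, 2), "Gbit/s of pixel data")

That's about 1.33 Gbit/s for 720p60 versus about 2.99 Gbit/s for 1080p60. More than double the data through the same wire, which is why a marginal cable can pass one and choke on the other.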
  9. Yeah. Depending on gear and situation, cable quality can be more or less of an issue. It's not just down to noise reduction, though. Lower-quality cables have less overall bandwidth, so... if you try to send too complex a signal through, it gets garbled. Amusingly, it's actually more obvious in the digital realm, because rather than "smearing", you get clipping and dropouts. And HDMI was never intended for long runs, so it's very vulnerable to degradation. Analog is pretty durable in that respect. It degrades gracefully.

And Monster generally makes overpriced products with greatly exaggerated benefits, even ignoring their incredibly unethical business practices and ridiculous hair-trigger lawsuits. My personal favorite example was a PS2 controller extension cable they used to make that they claimed would actually get signals from your controller to the PS2 faster so your game would be more responsive... which is completely and utterly impossible on so many levels that I'm not really sure where to start. They're one of those companies that spends a lot more on advertising (and lawyers) than they do on any actual product development, and have a reputation that's bought, not earned.

And this is a hot-button for me, so let me drag out my soapbox and climb up for a minute or three. I find the difference between 128 kb/s MP3s and even just 192 kb/s MP3s is audible with almost everything. Maybe not through pack-in earbuds (I don't use them, for comfort reasons as much as audio quality), but... even a pair of junk PC speakers will fail to mask how awful 128 kb/s is (I DO use junk PC speakers for my computer).

128 kb/s needs to just die. It was chosen as a tradeoff between size and quality when dialup was the only game in town, and the concerns that prompted the tradeoff to land where it did are long gone. There's simply no good reason for it to still exist. I'm not saying everyone needs to start using FLAC exclusively or, god forbid, raw wave, but... variable bitrate, average 192 kb/s should be the new minimum standard. It sounds MUCH nicer, you aren't wasting bits by encoding dead silence at 192 kb/s, and it's not like VBR is rare and unsupported at this point.

VBR actually makes a LOT of sense, since the low-complexity parts of the track use a lower bitrate than the more complex parts. So a burst of dead silence can be near-0 kb/s, a full-orchestra crash can surge up to 320, and the file is about the size of a 192 kb/s constant-bitrate file (toy sketch at the end of this post).

Now, I DO have some tracks where 128 doesn't really hurt them that much. The complexity of the audio WILL affect how badly it gets mangled. But most of the stuff... it shows (or sounds, I should say). And the more complex the music, the worse it will show. If it's a piano solo, that lone piano gets the full 128 kb/s, but if it's part of a full orchestra, it's sort of like it's divided among all the instruments, and the quality plummets.

In the interests of full disclosure, I go for lossless files on my PC, and convert to high-quality VBR for use on my MP3 player (average bitrates landing between 192 and 256 kb/s).

I'd recommend getting some nice cheap headphones. I used to use a set of Koss KSC-75s. They're pretty well-known for hitting a cheap-but-good window. Of course, my current headphones are pretty much junk, and while they don't sound particularly GOOD, I can still tell the difference between a low-bitrate and high-bitrate file through them.

Definitely. But mid-range gear will benefit if you replace those flimsy cables that came with your DVD player with something a little better. I'm not saying go all-out and get some MONSTER Electroblaster Cables with Superconductive Toroidal Insulation and Patented Turbine-Cut Connectors for Maximal Look-Cool Factor and Optimum Wallet Drainage, in fact I am very much NOT saying that, but... something thicker than a sheet of paper is nice.
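And a toy illustration of the CBR-versus-VBR point, in Python. The per-frame "complexity" numbers and the complexity-to-bitrate mapping are completely made up (real encoder decisions are far more involved); the point is just that VBR spends bits where the audio needs them.

    frames = [0.0, 0.1, 0.9, 1.0, 0.8, 0.05, 0.0, 0.7]   # invented complexity, 0..1

    def vbr_rate(c, floor=32, ceiling=320):
        """Map frame complexity to a bitrate between the floor and 320 kb/s."""
        return floor + c * (ceiling - floor)

    cbr = [128.0] * len(frames)
    vbr = [vbr_rate(c) for c in frames]

    print("CBR average:", sum(cbr) / len(cbr), "kb/s")         # 128, even for silence
    print("VBR average:", round(sum(vbr) / len(vbr)), "kb/s")  # about 160 here

The near-silent frames ride the floor, the full-orchestra frames max out at 320, and the average lands in the middle instead of starving the busy parts to pamper the quiet ones.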
  10. What, you mean you didn't collect Wire Transport Flywheel media too? Yeah, I collect a little bit of info about everything, it seems. Nice to find a use for it sometimes.
  11. To be fair, records DO have more range than CDs, from a purely technical standpoint. Extracting that extra range without the physical issues of reading the record causing artifacts is another story. And at the end of the day, most of the difference boils down to the different masters used. A lot of CDs are mastered horribly incompetently. Tubes, though... that's indefensible.

As far as old hardware VS new hardware... Audio doesn't really change a lot. A good piece of gear from 1987 is still a good piece of gear, as long as it's in good shape. If your CD/DVD/WTF player has good DACs and you aren't using shitty cables, there's not really an advantage to using a digital run instead of an analog one. And to the consternation of many and defiance of logic... analog video works BETTER over long runs than HDMI (because HDMI is not a particularly GOOD digital video standard).
  12. Any theory that ends with "I blame Disney" is okay in my book. They have a lot to answer for.
  13. Yes, but look at it from their viewpoint. A refreshed Macross release would take manpower, shelf space, and eyeballs that could better be spent on the latest installment/whoring out/slanderous abuse of the Gundam cash cow. Moo. Not MUCH manpower, shelf space, and eyeball time, but... Bandai is not exactly a neutral party here. They know what side their bread is buttered on. There's logic here, it's just not very consumer-friendly logic.
  14. Really?! I may get him on that virtue alone. Had a whole slew of Generations stuff hit the shelf this week. Everything but Megatron and Blurr, I think.

When you DO find Thunderwing... his two missile launchers can peg together into a single larger gun. And he has JUST BARELY ENOUGH articulation to hold it with a peg in each hand, which makes for some fairly impressive firepower on a bot that size (but still less impressive than Classics Bumblebee/Cliffjumper with Powermaster Prime's guns...). Sadly, the guns will NOT peg into the "detachable recon drone", which is what I thought those tabs on top of the guns were for initially. Spacing's all wrong. But to be fair, the guns are larger than the drone anyways.

The wings can be folded in on the robot so he looks less super robot-y and more Transformer-y. I'm tempted to tie the wing sweep to the guns, so they're only folded out when he's got the big two-handed gun. Sort of a "super mode" effect. They can also be flipped OUT in jet mode, giving him a forward-swept wing look, albeit not a very good one. But then, he's not a very good jet... though at least his face and upper legs are hidden, and his lower legs look like they belong there, which is better than SOME jets. In exchange, there's a big hole in the chestplate where the part that hides the face folds up from. And there's just no hiding the arms, though twisting the fists around 90 degrees helps.
  15. The best ellipsis use is in Super Robot Wars Alpha, when Heero and Rei have a conversation consisting ENTIRELY of ...s. Ah, yes. How DID we forget Sinistar, the greatest philosopher of our age? http://onastick.net/drew/sinistar/ Though for raw quotability, I prefer Berzerk. "Chicken, fight like a robot!"
  16. Send 'em all to hell! YEAAAAAAAAAAAAAAAAAAAAHHHHHHHHHHHHHHHHHHHHHHHH!!!!! EDF! EDF! EDF!
  17. And then the Earth will send two prototype Mosquito fighters with adaptive weapons systems in and crush the rebellion! (Go play Mars Matrix.)

Seriously, though... we HAVE to leave Earth if we intend to survive. Global extinctions aren't really an uncommon event, and I'd like to think we'll have scattered before the next one hits Earth. We don't need the zentradi to wipe things out. We ALREADY live in a cosmic shooting gallery. And the Earth's had some pretty radical climate shifts in its history even WITHOUT the help of giant space rocks. Heck, Homo sapiens came into existence DURING one. And of course, ultra-long term, the sun has a finite lifespan. Assuming that our descendants are still around in 4 billion years.

Of course, we will ignore the long-term in favor of the shortest-term we can think of until the day we see a definite doom. At which point, it'll be too late.
  18. I thought it was a pretty fun movie. If it helps, pretend those AREN'T the 80s cartoon Turtles. They're from a DIFFERENT, but closely related universe. And I make no claims as to whether the originals were that goofy or not. I've actively avoided seeing any of the old Turtles cartoons again. Rewatching my childhood favorites always ends in pain. And me wondering how I could ever have been THAT stupid.
  19. Touché. They'd definitely want a way to patch up injured commanders and archivists. And possibly cybernetic upgrades (if nothing else, Bodol's ship-body in DYRL is one; I admit to being curious if Britai's eyeplate was intended as a replacement for an eye injured in a previous campaign or an "upgrade" of some sort). Of course, if what we've seen is any indication, the troops on the front lines aren't likely to come back in need of medical treatment, because their mechs tend to erupt into fireballs around them. But that could easily be due to the camera pointing at the most exciting things at any given moment, similar to the VF-1 and destroid combustibility.
  20. Does it help if the flippers are an extra feature, and not part of the transformation proper?
  21. 4 words: Seaspray has scuba flippers.
  22. I would actually assume the zentradi DON'T have much in the way of medical technology. Why bother, when you can pop the top on a clone chamber and send a fresh troop to the front immediately instead of waiting for the injured guy to heal?
  23. Or since they'd never seen any physical remains or pictures of the former crew, they figured they were fighting a giant version of the Predator, Creature from the Black Lagoon, or something similarly not-quite-human. Not that I DOUBT there was misinformation to the troops, but it was implied Roy was "in the loop." Heck, even if they HAD found physical remains, they probably would've assumed they looked more like Klingons or Romulans than humans.
  24. That's actually an interesting point. You WOULD think that any evidence of a mixed-size crew would be noteworthy enough to get mention. The implication DOES seem to be that it was a macro-zentradi-only vessel (and in DYRL, the ASS-1 was a meltrandi vessel in that version of events, making it explicitly macro-only). The question becomes... is this unique to some SA vessels (making the ASS-1 an odd duck), or are all SA ships macro-only or miclone-only?

Though they should still be able to extrapolate the general shape of the giants from the giant-sized equipment they used. Size matters not. Look at the zentradi restroom Max does his quick-change in. It's alien and teched up, but more or less recognizable as a restroom (or was intended to be, anyways. Max can tell as soon as he sees the space toilet.) If the zentradi were, say, centaurs instead of human... their restrooms would look a LOT different. And there wouldn't BE chairs, not as we know them anyways. Hallways would probably be a lot wider (need more space to turn around if you have a horse's backside). Airlocks would be deeper. Et cetera.

I'd ask why they didn't find anything that would IMMEDIATELY make it clear how human they were (say, images in the crew database or 5-fingered spacesuits), but the ship was sent off as a booby trap. It was probably emptied of any gear that was easily removable, the database purged, the crew evacuated, and the autopilot programmed.
  25. They knew the size. Probably even that the aliens were humanoid. But I think the implication is that NO ONE knew they were 30-foot humans.