There is No Ghost in the Shell: GITS Movie Analysis

tmonster
“There is a face beneath this mask, but it isn't me. I'm no more that face than I am the muscles beneath it, or the bones beneath that.”
V for Vendetta
 

tmonster


It's getting beaten up by the critics, and it has flaws, but I enjoyed this movie
 


tmonster
Coming soon: A brain implant to restore memory
by Kerry Sheridan
[Image caption: The implantable BrainGate neural interface can detect and record brain signals, allowing persons who have lost the use of arms and legs to have point-and-click control of a computer. A BrainGate device has remained functional for 2.7 years.]
In the next few months, highly secretive US military researchers say they will unveil new advances toward developing a brain implant that could one day restore a wounded soldier's memory.


The Defense Advanced Research Projects Agency (DARPA) is forging ahead with a four-year plan to build a sophisticated memory stimulator, as part of President Barack Obama's $100 million initiative to better understand the human brain.

The science has never been done before, and raises ethical questions about whether the human mind should be manipulated in the name of staving off war injuries or managing the aging brain.

Some say those who could benefit include the five million Americans with Alzheimer's disease and the nearly 300,000 US military men and women who have sustained traumatic brain injuries in Iraq and Afghanistan.

"If you have been injured in the line of duty and you can't remember your family, we want to be able to restore those kinds of functions," DARPA program manager Justin Sanchez said this week at a conference in the US capital convened by the Center for Brain Health at the University of Texas.

"We think that we can develop neuroprosthetic devices that can directly interface with the hippocampus, and can restore the first type of memories we are looking at, the declarative memories," he said.

Declarative memories are recollections of people, events, facts and figures, and no research has ever shown they can be put back once they are lost.

Early days

What researchers have been able to do so far is help reduce tremors in people with Parkinson's disease, cut back on seizures among epileptics and even boost memory in some Alzheimer's patients through a process called deep brain stimulation.

Those devices were inspired by cardiac pacemakers, and pulse electricity into the brain much like a steady drum beat, but they don't work for everyone.

Experts say a much more nuanced approach is needed when it comes to restoring memory.

"Memory is patterns and connections," explained Robert Hampson, an associate professor at Wake Forest University.

"For us to come up with a memory prosthetic, we would actually have to have something that delivers specific patterns," said Hampson, adding that he could not comment specifically on DARPA's plans.


Hampson's research on rodents and monkeys has shown that neurons in the hippocampus—the part of the brain that processes memory—fire differently when they see red or blue, or a picture of a face versus a type of food.

Equipped with this knowledge, Hampson and colleagues have been able to extend the animals' short-term, working memory using brain prosthetics to stimulate the hippocampus.

They could coax a drugged monkey into performing closer to normal at a memory task, and confuse it by manipulating the signal so that it would choose the opposite image of what it remembered.

According to Hampson, to restore a human's specific memory, scientists would have to know the precise pattern for that memory.

Instead, scientists in the field think they could improve a person's memory by simply helping the brain work more like it used to before the injury.

"The idea is to restore a function back to normal or near normal of the memory processing areas of the brain so that the person can access their formed memories, and so that they can form new memories as needed," Hampson said.

Ethical concerns

It's easy to see how manipulating memories in people could open up an ethical minefield, said Arthur Caplan, a medical ethicist at New York University's Langone Medical Center.

"When you fool around with the brain you are fooling around with personal identity," said Caplan, who advises DARPA on matters of synthetic biology but not neuroscience.

"The cost of altering the mind is you risk losing sense of self, and that is a new kind of risk we never faced."

When it comes to soldiers, the potential for erasing memories or inserting new ones could interfere with combat techniques, make warriors more violent and less conscientious, or even thwart investigations into war crimes, he said.

"If I could take a pill or put a helmet on and have some memories wiped out, maybe I don't have to live with the consequences of what I do," Caplan said.

DARPA's website says that because its "programs push the leading edge of science," the agency "periodically convenes scholars with expertise in these issues to discuss relevant ethical, legal, and social issues."

Just who might be first in line for the experiments is another of the many unknowns.

Sanchez said the path forward will be formally announced in the next few months.

"We have got some of the most talented scientists in our country that will be working on this project. So stay tuned. Lots of exciting things will be coming in the very near future."
 

tmonster
"A new technique pioneered by scientists working on project NEUWalk at the Swiss Federal Institute for Technology (EPFL) have figured out a way to reactivate the severed spinal cords of fully paralyzed rats, allowing them to walk again via remote control. Human trials are scheduled for next summer. "We have complete control of the rat's hind legs," EPFL neuroscientist Grégoire Courtine said. "The rat has no voluntary control of its limbs, but the severed spinal cord can be reactivated and stimulated to perform natural walking. We can control in real-time how the rat moves forward and how high it lifts its legs."
 

Poitier
Your mind will not be uploaded
The recent movie “Transcendence” will not be troubling the sci-fi canon of classics, if the reviews are anything to go by. But its central plot device – “uploading” a human consciousness to a computer – remains both a central aspiration of transhumanists, and a source of queasy fascination to the rest of us. The idea is that someone’s mind is simply a computer programme, that in the future could be run on a much more powerful computer than a brain, just as one might run an old arcade game on a modern PC in emulation mode. “Mind uploading” has a clear appeal for people who wish to escape the constraints of our flesh and blood existence, notably the constraint of our inevitable mortality.

In this post I want to consider two questions about mind uploading, from my perspective as a scientist. I’m going to use as an operational definition of “uploading a mind” the requirement that we can carry out a computer simulation of the activity of the brain in question that is indistinguishable in its outputs from the brain itself. For this, we would need to be able to determine the state of an individual’s brain to sufficient accuracy that it would be possible to run a simulation that accurately predicted the future behaviour of that individual and would convince an external observer that it faithfully captured the individual’s identity. I’m entirely aware that this operational definition already glosses over some deep conceptual questions, but it’s a good concrete starting point. My first question is whether it will be possible to upload the mind of anyone reading this now. My answer to this is no, with a high degree of probability, given what we know now about how the brain works, what we can do now technologically, and what technological advances are likely in our lifetimes. My second question is whether it will ever be possible to upload a mind, or whether there is some point of principle that will always make this impossible. I’m obviously much less certain about this, but I remain sceptical.

This will be a long post, going into some technical detail. To summarise my argument, I start by asking whether or when it will be possible to map out the “wiring diagram” of an individual’s brain – the map of all the connections between its 100 billion or so neurons. We’ll probably be able to achieve this mapping in the coming decades, but only for a dead and sectioned brain; the challenges for mapping out a living brain at sub-micron scales look very hard. Then we’ll ask some fundamental questions about what it means to simulate a brain. Simulating brains at the levels of neurons and synapses requires the input of phenomenological equations, whose parameters vary across the components of the brain and change with time, and are inaccessible to in-vivo experiment. Unlike artificial computers, there is no clean digital abstraction layer in the brain; given the biological history of nervous systems as evolved, rather than designed, systems, there’s no reason to expect one. The fundamental unit of biological information processing is the molecule, rather than any higher level structure like a neuron or a synapse; molecular level information processing evolved very early in the history of life. Living organisms sense their environment, they react to what they are sensing by changing the way they behave, and if they are able to, by changing the environment too. This kind of information processing, unsurprisingly, remains central to all organisms, humans included, and this means that a true simulation of the brain would need to be carried out at the molecular scale, rather than the cellular scale. The scale of the necessary simulation is out of reach of any currently foreseeable advance in computing power. Finally I will conclude with some much more speculative thoughts about the central role of randomness in biological information processing. I’ll ask where this randomness comes from, finding an ultimate origin in quantum mechanical fluctuations, and speculate about what in-principle implications that might have on the simulation of consciousness.

Why would people think mind uploading will be possible in our lifetimes, given the scientific implausibility of this suggestion? I ascribe this to a combination of over-literal interpretation of some prevalent metaphors about the brain, over-optimistic projections of the speed of technological advance, a lack of clear thinking about the difference between evolved and designed systems, and above all wishful thinking arising from people’s obvious aversion to death and oblivion.

On science and metaphors

I need to make a couple of preliminary comments to begin with. First, while I’m sure there’s a great deal more biology to learn about how the brain works, I don’t see yet that there’s any cause to suppose we need fundamentally new physics to understand it. Of course, new discoveries may change everything, but it seems to me that the physics we’ve got is quite complicated enough, and this discussion will be couched entirely in currently known, fundamentally physicalist, principles.

The second point is that, to get anywhere in this discussion, we’re going to need to immunise ourselves against the way in which almost all popular discussion of neuroscience is carried out in metaphorical language. Metaphors used clearly and well are powerful aids to understanding, but when we take them too literally they can be badly misleading. It’s an interesting historical reflection that when computers were new and unfamiliar, the metaphorical traffic led from biological brains to electronic computers. Since computers were popularly described as “electronic brains”, it’s not surprising that biological metaphors like “memory” were quickly naturalised in the way computers were described. But now the metaphors go the other way, and we think about the brain as if it were a computer (I think the brain is a computer, by the way, but it’s a computer that’s so different to man-made ones, so plastic and mutable, so much immersed in and responsive to its environment, that comparisons with the computers we know about are bound to be misleading). So if what we are discussing is how easy or possible it will be to emulate the brain with a man-made computer, the fact that we are so accustomed to metaphorical descriptions of brains in terms of man-made computers will naturally bias us to positive answers. It’s too easy to move from saying a neuron is analogous to a simple combination of logic gates in a computer, say, to thinking that it can be replaced by one. A further problem is that many of these metaphors are now so stale and worn out that they have lost all force, and the substance of the original comparison has been forgotten. We often hear, for example, the assertion that some characteristic or other is “hard-wired” in the brain, but if one stops to think what an animal’s brain looks and feels like there’s nothing much hard about it. It’s a soft machine.

Mapping the brain’s “wiring diagram”

One metaphor that is important is the idea that the brain has a “wiring diagram”. The human brain has about 100 billion neurons, each of which is connected to many others by thin fibres – the axons and dendrites – along which electrical signals pass. There’s about 100,000 miles of axon in a brain, meeting at somewhere between a hundred trillion and a thousand trillion synaptic connections. It’s this pattern of connectivity between the neurons through the axons and dendrites that constitutes the “wiring diagram” of the brain. I’ll argue below that knowing this “wiring diagram” is not by itself a sufficient condition for simulating the operation of a brain – it must surely, however, be a necessary one.
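
To get a feel for the scale of such a map, here is a rough back-of-the-envelope sketch in Python using the figures quoted above (about 10^11 neurons and somewhere between 10^14 and 10^15 synapses). The bytes-per-synapse figure is an illustrative assumption (two endpoint IDs plus a weight), not a measured quantity.

```python
# Rough scale estimate for storing a human "wiring diagram" (connectome).
# Neuron and synapse counts come from the text above; the bytes-per-synapse
# figure is an illustrative assumption (two 8-byte neuron IDs plus a weight).

NEURONS = 1e11           # ~100 billion neurons
SYNAPSES_LOW = 1e14      # "a hundred trillion" synaptic connections
SYNAPSES_HIGH = 1e15     # "a thousand trillion"
BYTES_PER_SYNAPSE = 20   # assumed: two 8-byte IDs + a 4-byte weight

def petabytes(n_synapses: float) -> float:
    """Storage in petabytes for a bare connection list."""
    return n_synapses * BYTES_PER_SYNAPSE / 1e15

print(f"Low estimate:  {petabytes(SYNAPSES_LOW):.1f} PB")
print(f"High estimate: {petabytes(SYNAPSES_HIGH):.1f} PB")
```

Even this bare connection list, with no molecular or dynamical state attached, runs to petabytes – and, as argued below, the connection list is nowhere near enough.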

So far, scientists have successfully mapped out the “wiring diagram” of one organism’s nervous system – the microscopic worm C. elegans, which has a total of 302 neurons. This achievement was itself a technical tour-de-force, which illustrates what would need to be done to determine the immeasurably more complex “wiring diagram” of the human brain. The issue is that these fibres are thin (hundreds of nanometres, for the thinnest of them), very densely packed, and the fibres from a single neuron can pervade a very large volume (this review in Science – The Big and the Small: Challenges of Imaging the Brain’s Circuits ($) – is an excellent up-to-date overview of what’s possible now and what the challenges are). Currently electron microscopy is required to resolve the finest connections, and this can only be done on thin sections. Although new high-resolution imaging techniques may well be developed, it’s difficult to see how this requirement to image section by section will go away. Magnetic resonance imaging, on the other hand, can image an intact brain, but at much lower resolution – more like millimetres than nanometres. The resolution of MRI derives from the strength of the magnetic field gradient you can sustain: you can have a large gradient over a small volume, but if you’re constrained to keep the brain intact, that provides quite a hard limit.
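
Another way to feel the size of the imaging problem is to estimate the raw data volume for a whole brain at electron-microscope resolution. The sketch below assumes, purely for illustration, a brain volume of about 1,200 cm³ imaged at 10 nm × 10 nm × 30 nm voxels with one byte per voxel; none of these numbers come from the review cited above.

```python
# Sketch: raw image data needed to capture a whole brain at EM resolution.
# Brain volume, voxel size and bytes per voxel are illustrative assumptions.

BRAIN_VOLUME_M3 = 1.2e-3           # ~1,200 cm^3 expressed in m^3
VOXEL = (10e-9, 10e-9, 30e-9)      # assumed 10 nm x 10 nm x 30 nm voxels
BYTES_PER_VOXEL = 1

voxel_volume = VOXEL[0] * VOXEL[1] * VOXEL[2]
n_voxels = BRAIN_VOLUME_M3 / voxel_volume
exabytes = n_voxels * BYTES_PER_VOXEL / 1e18

print(f"Voxels:    {n_voxels:.2e}")
print(f"Data size: {exabytes:.0f} EB (exabytes)")
```

On these assumptions the raw image stack alone runs to hundreds of exabytes, before any tracing of fibres through it even begins.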

Proponents of mind uploading who recognise these difficulties at this point resort to the idea of nanobots crawling through the brain, reading it from the inside. I’ve discussed at length why I think it will be very much more difficult than people think to create such nanobots, for example in my article Rupturing the Nanotech Rapture, and in Nanobots, nanomedicine, Kurzweil, Freitas and Merkle I discuss why I don’t think the counter-arguments of their proponents are convincing.

Mapping out all the neural connections of a human brain, then, will be difficult. It probably will be done, on a timescale perhaps of decades. The big but, though, is that this mapping will be destructive, and the brain it is done on will be definitively dead before the process starts. And massive job though it will be to map out this “micro-scale connectome”, there’s something very important it doesn’t tell you – the difference between a live brain and a dead lump of meat – namely, what the initial electrical state of the brain is, where the ion gradients are, and what the molecules are doing. But more on molecules later…

Modelling, simulation, emulation: why mind uploading might make sense if you believed in intelligent design

If you did have a map of all the neural connections of a human brain, dead or alive, is that enough to simulate it? You could combine the map with known equations for the propagation of electrical signals along axons (the Hodgkin-Huxley equations), models of neurons and models for the behaviour of synapses. This is the level of simulation, for example, carried out in the “Blue Brain” project (see this review (PDF) for a semi-technical overview). This is a very interesting thing to do from the point of neuroscience, but it is not a simulation of a human brain, and certainly not of any individual’s brain. It’s a model, which aggregates phenomenological descriptions of the collective behaviours and interactions of components like the many varieties of voltage gated ion channels and the synaptic vesicles. The equations you’d use to model an individual synapse, for example, would have different parameters for different synapses, and these parameters change with time (and in response to the information being processed). Without an understanding of what’s going on in the neuron at the molecular level, these are parameters you would need to measure experimentally for each synapse.
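
For readers who have not met them, the Hodgkin-Huxley equations couple the membrane voltage to three “gating” variables through empirical rate functions. The sketch below integrates a single compartment with the standard squid-axon parameters using a simple forward-Euler scheme; it is a textbook toy, not a model of any real neuron, and the point to notice is how much of the behaviour lives in phenomenological parameters (conductances, rate functions) of exactly the kind the text says would have to be measured component by component in a real brain.

```python
import numpy as np

# Minimal single-compartment Hodgkin-Huxley model (standard squid-axon
# parameters, -65 mV resting convention), integrated with forward Euler.
# A textbook phenomenological model, not a simulation of any real neuron.

C_m = 1.0                             # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4   # reversal potentials, mV

# Empirical voltage-dependent rate functions (1/ms) for the gating variables.
a_m = lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
b_m = lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0)
a_h = lambda V: 0.07 * np.exp(-(V + 65.0) / 20.0)
b_h = lambda V: 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
a_n = lambda V: 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
b_n = lambda V: 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, t_max = 0.01, 50.0                # time step and duration, ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32   # initial state near rest
spikes = 0

for step in range(int(t_max / dt)):
    I_ext = 10.0 if step * dt > 5.0 else 0.0   # step current, uA/cm^2
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    dV = (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    prev_V, V = V, V + dt * dV
    if prev_V < 0.0 <= V:             # crude spike count: upward zero crossing
        spikes += 1

print(f"Spikes in {t_max:.0f} ms at 10 uA/cm^2: {spikes}")
```

Change any of the conductances or rate constants and the firing pattern changes; in a living brain those “constants” differ between cells and drift with time, which is the measurement problem being described above.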

An analogy might make this clearer. Let me ask this question: is it possible to simulate the CPU in your mobile phone? At first sight this seems a stupid question – of course one can predict with a very high degree of certainty what the outputs of the CPU would be for any given set of inputs. After all, the engineers at ARM will have done just such simulations before any of the designs had even been manufactured, using well-understood and reliable design software. But a sceptical physicist might point out that every CPU is different at the atomic level, due to the inherent finite tolerances of manufacturing, and in any case the scale of the system is much too large to be able to simulate at the quantum mechanical level that would be needed to capture the electronic characteristics of the device.

In this case, of course, the engineers are right, for all practical purposes. This is because the phenomenology that predicts the behaviour of individual circuit elements is well-understood in terms of the physics, and the way these elements behave is simple, reliable and robust – robust in the sense that quite a lot of variation in the atomic configuration produces the same outcomes. We can think of the system as having three distinct levels of description. There is the detailed level of what the electrons and ions are doing, which would account for the basic electrical properties of the component semiconductors and insulators, and the junctions and interfaces between them. Then there is the behaviour of the circuit elements that are built from these materials – the current-voltage characteristics of the field effect transistors, and the way these components are built up into circuits. And finally, there is a description at a digital level, in which logical operations are implemented. Once one has designed circuit elements with clear thresholds and strongly non-linear behaviour, one can rely on there being a clean separation between the digital and physical levels. It’s this clean separation between the physical and the digital that makes the job of emulating the behaviour of one type of CPU on another one relatively uncomplicated.
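
The “clean separation” being described can be illustrated with a toy example: push noisy analogue voltages through a strongly non-linear threshold and the logical behaviour is untouched by quite large physical variation. The supply levels, threshold and noise amplitude below are arbitrary illustrative choices, not a model of any real circuit.

```python
import random

# Toy illustration of a "digital abstraction layer": a thresholded NAND gate
# fed with noisy analogue voltages. Supply levels, threshold and noise
# amplitude are arbitrary illustrative choices.

V_HIGH, V_LOW, THRESHOLD, NOISE = 1.0, 0.0, 0.5, 0.15

def analogue(bit: int) -> float:
    """A nominal logic level plus physical-level noise."""
    return (V_HIGH if bit else V_LOW) + random.uniform(-NOISE, NOISE)

def nand(a_volts: float, b_volts: float) -> int:
    """Threshold the inputs, then apply the logical NAND."""
    a, b = a_volts > THRESHOLD, b_volts > THRESHOLD
    return 0 if (a and b) else 1

# Despite the noise, the truth table comes out clean every time.
random.seed(0)
for a in (0, 1):
    for b in (0, 1):
        outputs = {nand(analogue(a), analogue(b)) for _ in range(10_000)}
        print(f"NAND({a},{b}) -> {outputs}")
```

As long as the noise stays well clear of the threshold, the digital description is exact, which is precisely the property the next paragraph argues a brain was never engineered to have.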

But this separation between the physical and the digital in an integrated circuit isn’t an accident or something pre-ordained – it happens because we’ve designed it to be that way. For those of us who don’t accept the idea of intelligent design in biology, that’s not true for brains. There is no clean “digital abstraction layer” in a brain – why should there be, unless someone designed it that way? In a brain, for example, the digital is continually remodelling the physical – we see changes in connectivity and changes in synaptic strength as a consequence of the information being processed, changes that, as we will see, are the manifestation of substantial physical changes, at the molecular level, in the neurons and synapses.
 

Poitier

The unit of biological information processing is the molecule


Is there any general principle that underlies biological information processing, in the brain and elsewhere, that would help us understand what ionic conduction, synaptic response, learning and so on have in common? I believe there is – underlying all these phenomena are processes of macromolecular shape change in response to a changing local environment. Ion channel proteins change shape in response to the electric field across the membrane, opening or closing pores; at the synapse shape-changing proteins respond to electrical changes to trigger the bursting open of synaptic vesicles to release the neurotransmitters, which themselves bind to protein receptors to transmit their signal, and complicated sequences of protein shape changes underlie the signalling networks that strengthen and weaken synaptic responses to make memory, remodelling the connections between neurons.

This emphasises that the fundamental unit of biological information processing is not the neuron or the synapse, it’s the molecule. Dennis Bray, in an important 1995 paper ”Protein molecules as computational elements in living cells”, pointed out that a protein molecule can act as a logic gate through the process of allostery – its catalytic activity is modified by the presence or absence of bound chemicals. In this chemical version of logic, the inputs are the presence or absence of certain small molecules, and the outputs are the molecules that the protein produces, in the presence of the right input chemicals, by catalysis. As these output chemicals can themselves be the inputs to other protein logic gates, complex computational networks linking the inputs and outputs of many different logic gates can be built up. The ultimate inputs of these circuits will be environmental cues – the presence or absence of chemicals or other environmental triggers detected by molecular sensors at the surface of the cells. The ultimate outputs can be short-term – to activate a molecular motor so that a cell swims towards a food source or away from a toxin. Or they can be long term, in activating and deactivating different genes so that the cell builds different structures for itself, or even changes the entire direction of its development.
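
Bray's picture can be caricatured in a few lines of code: treat each allosteric protein as a gate that produces its output molecule only when its required input molecules are present, and chain the gates into a network. The molecule names and the little network below are invented purely for illustration; they are not taken from Bray's paper.

```python
# Caricature of Bray-style chemical logic: each "protein" produces its output
# molecule only when its required input molecules are present in the mixture.
# All molecule names and the network itself are invented for illustration.

NETWORK = [
    # (protein,    required inputs,           output produced)
    ("receptor_A", {"sugar"},                 "signal_1"),
    ("receptor_B", {"toxin"},                 "signal_2"),
    ("kinase_C",   {"signal_1"},              "motor_on"),    # short-term: swim
    ("kinase_D",   {"signal_1", "signal_2"},  "gene_X_on"),   # AND gate: long-term
]

def settle(environment: set[str]) -> set[str]:
    """Iterate the gates until no new output chemicals appear."""
    present = set(environment)
    while True:
        new = {out for _, inputs, out in NETWORK if inputs <= present}
        if new <= present:
            return present
        present |= new

print(settle({"sugar"}))           # signal_1 and motor_on, but not gene_X_on
print(settle({"sugar", "toxin"}))  # now the AND gate fires as well
```

Real cells, of course, run thousands of such “gates” concurrently, with graded concentrations and stochastic kinetics rather than the crisp on/off sets used here, which only strengthens the point about where the information processing actually lives.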

This is how a single celled organism like an amoeba can exhibit behaviour that is in effect purposeful, that is adaptive to the clues it detects from the environment around it. All living cells process information this way. In the collective alliance of cells that makes up a multi-cellular organism like a human, all our cells have the ability to process information. The particular cells that specialise in doing information processing and long-ranged communication – the neurons – start out with the general capability for computation that all cells have, but through evolution have developed this capability to a higher degree and added to it some new tricks. The most important of these new tricks is an ability to control the flow of ions across a membrane in a way that modifies the membrane potential, allowing information to be carried over long distances by the passage of shock waves of membrane potential, and communications to be made between neurons in response to these rapid changes in membrane potential through the release of chemicals at synapses. But, as always happens in evolved systems, these are new tricks built on the old hardware and old design principles – molecules whose shape changes in response to changes in their environment, this shape change producing functional effects (such as the opening of an ion channel in response to a change in membrane potential).

The molecular basis of biological information processing emphasises the limitations of the “wiring” metaphor. Determining the location and connectivity of individual neurons – the “connectome”, as it has come to be called in neuroscience – is a necessary, but far from sufficient, condition for specifying the informational state of the brain; to do that completely requires us to know where the relevant molecules are, how many of them are present, and what state they’re in.

The brain, randomness, and quantum mechanics

The molecular basis of biological computation means that it isn’t deterministic, it’s stochastic, it’s random. This randomness isn’t an accidental add-on, it’s intrinsic to the way molecular information processing works. Any molecule in a warm, wet environment like the cell is constantly bombarded by its neighbouring water molecules, and this bombardment leads to the constant jiggling we call Brownian motion. But it’s exactly the same bombardment that drives the molecule to change shape when its environment changes. So if we simulate, at the molecular level, the key parts of the information processing system of the brain, like the ion channels or the synaptic vesicles, or the broader cell signalling mechanisms by which the neurons remodel themselves in response to the information they carry, we need to explicitly include that randomness.

I want to speculate here about what the implications are of this inherently random character of biological information processing. A great deal has been written about randomness, determinism and the possibility of free will, and I’m largely going to avoid these tricky issues. I will make one important point, though. It seems to me that all the agonising about whether the idea of free will is compatible with a brain that operates through deterministic physics is completely misplaced, because the brain just doesn’t operate through deterministic physics.

In a computer simulation, we’d build in the randomness by calls to a pseudo-random number generator, as we compute the noise term in the Langevin equation that would describe, for example, the internal motions of a receptor protein docking with a neurotransmitter molecule. In the real world, the question we have to answer is whether this randomness is simply a reflection of our lack of knowledge. Does it simply arise from a decision we make not to keep track of every detail of each molecular motion in a very complex system? Or is it “real” randomness that is intrinsic to the fundamental physics, and in particular to the quantum mechanical character of reality? I think it is real randomness, whose origins can be traced back to quantum fluctuations.
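
To make the “noise term in the Langevin equation” concrete, here is a minimal overdamped Brownian-dynamics sketch of a particle in a double-well potential, a crude cartoon of a protein flipping between two conformations. The potential, friction coefficient and temperature are arbitrary reduced units chosen only to show how the pseudo-random kicks enter the update; this is not a model of any particular receptor.

```python
import numpy as np

# Overdamped Langevin dynamics of a particle in a double-well potential,
# U(x) = (x^2 - 1)^2 -- a cartoon of a protein hopping between two
# conformational states. Units, friction and temperature are arbitrary.

rng = np.random.default_rng(1)        # the pseudo-random number generator
gamma, kT, dt, steps = 1.0, 0.4, 1e-3, 1_000_000

def dU(x: float) -> float:
    """Derivative of the double-well potential U(x) = (x^2 - 1)^2."""
    return 4.0 * x * (x * x - 1.0)

x = -1.0                              # start in the left well
hops, side = 0, -1
for _ in range(steps):
    kick = np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal()
    x += -dU(x) / gamma * dt + kick   # deterministic drift + random kick
    if x * side < -0.5:               # crossed well past +/-0.5: count a hop
        hops += 1
        side = -side

print(f"Conformational hops in {steps} steps: {hops}")
```

The same pseudo-random kicks that make the trajectory jiggle are what carry the particle over the barrier between the two states: in molecular machines the noise is not a nuisance bolted onto the mechanism, it is part of the mechanism.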

To be clear, I’m not claiming here that the brain is a quantum computer, in the sense that it exploits quantum coherence in the way suggested by Roger Penrose. It seems to me difficult to understand how sufficient coherence could be maintained in the warm and wet environment of the cell. Instead, I want to focus on the origin of the forces between atoms and molecules. Attractions between uncharged molecules arise from the van der Waals force, which is most fundamentally understood as a fluctuation force, a force that arises from the way randomly fluctuating fields are modified by atoms and molecules. The fluctuating fields in question are the zero-point and thermal fluctuations of the electromagnetic field of the vacuum. Because the van der Waals force arises from quantum fluctuations, the force itself is fluctuating, and (see my earlier post Where the randomness comes from) these random fluctuations, of quantum origin, are sufficient to account for the randomness of the warm, wet nanoscale world.

The complexity theorist Scott Aaronson has recently written an interesting, but highly speculative, essay that touches on these issues – The Ghost in the Quantum Turing Machine (PDF). Aaronson argues that there is a type of unpredictability about the universe today that arises from the quantum unknowability of the initial conditions of the universe. He invokes the quantum no-cloning principle to argue that quantum state functions that have evolved unitarily, without decoherence, from the beginning of the universe – he calls these “freebits” – have a different character of uncertainty to the normal types of randomness we deal with using probability distributions. The question then is whether the fundamental unpredictability of “freebits” could be connected to some fundamental unpredictability of the decisions made by a human mind. Aaronson suggests it could, if there were a way in which the randomness inherent in the molecular processes underlying the operation of the brain – such as the opening and closing of ion channels – could be traced back to quantum uncertainty. My own suggestion is that the origin of van der Waals forces, as a fluctuation force, in the quantum fluctuations of the vacuum electromagnetic field, offers the connection that Aaronson is looking for.

If Aaronson is correct that his “freebit” picture shows how the fundamental unknowability of the quantum initial conditions of the universe translates into a fundamental unpredictability of certain physical processes now, and I am correct in my suggestion that the origins of the van der Waals force in the quantum fluctuations of fields provide a route through which such unpredictability translates into the outcomes of physical processes in the brain, then this provides an argument for mind uploading being impossible in principle. This is a conclusion I suggest only very tentatively.

Your mind will not be uploaded: dealing with it

But there’s nothing tentative about my conclusion that if you are alive now, your mind will not be uploaded. What comforts does this leave for those fearing oblivion and the void, but reluctant to engage with the traditional consolations of religion and philosophy? Transhumanists have two cards left to play.

Cryonics offers the promise of putting your brain in a deep freeze to wait for technology to catch up with the challenges of uploading. It’s clear that a piece of biological tissue that has formed a glass at -192 C will, if kept at that temperature, remain in that state indefinitely without significant molecular rearrangements. The question is how much information is lost in the interval between clinical death and achieving that uniform low temperature, as a consequence both of the inevitable return to equilibrium once living systems fail, and of the physical effects of rapid cooling. Physiological structures may survive, but as we’ve seen, it’s at the molecular level that the fundamentals of biological information processing take place, and current procedures will undoubtedly be highly perturbing at this level. All this leaves aside, of course, the sociological questions about why a future society, even if it has succeeded in overcoming the massive technical obstacles to characterising the brain at the molecular level, would wish to expend resources in reanimating the consciousnesses of the particular individuals who now choose this method of corporeal preservation.

The second possibility that appeals to transhumanists is that we are on the verge of a revolution in radical life extension. It’s unquestionably true, of course, that improvements in public health, typical lifestyles and medical techniques have led to year-on-year increases in life expectancy, but this is driven mostly by reducing premature death. The increasingly prevalent diseases of old age – particularly neurodegenerative diseases like Alzheimer’s – seem as intractable as ever; we don’t even have a firm understanding of their causes, let alone working therapies. While substantial fractions of our older people are suffering from cruel and incurable dementias, the idea of radical life extension seems to me to be a hollow joke.

Why should I worry about what transhumanists, or anyone else, believe in? As I began to discuss at the end of my last post, Transhumanism has never been modern, I don’t think the consequences of transhumanist thinking are entirely benign, and I’ll expand on that in a later post. But there is a very specific concern about science policy that I would like to conclude with. Radical ideas like mind uploading are not part of the scientific mainstream, but there is a danger that they can still end up distorting scientific priorities. Popular science books, TED talks and the like flirt with such ideas and give them currency, if not credibility, adding fuel to the Economy of Promises that influences – and distorts – the way resources are allocated between different scientific fields. Scientists doing computational neuroscience don’t themselves have to claim that their work will lead to mind uploading to benefit from an environment in which such claims are entertained by people like Ray Kurzweil, with a wide readership and some technical credibility. I think computational neuroscience will lead to some fascinating new science, but you could certainly question the proportionality of the resource it will receive compared to, say, more experimental work to understand the causes of neurodegenerative diseases.

 

tmonster
Why should I worry about what transhumanists, or anyone else, believe in? As I began to discuss at the end of my last post, Transhumanism has never been modern, I don’t think the consequences of transhumanist thinking are entirely benign, and I’ll expand on that in a later post.

Great question Bob:mjlol:

The coolest thing about science is that it either does shyt or it does not do shyt
see...this is live, not a rehearsal
A medical first: Quadriplegic man controls arm using a chip implanted in his brain
anything else is just conversation:mjlol:
 

tmonster
Les Baugh

A Colorado man made history at the Johns Hopkins University Applied Physics Laboratory (APL) this summer when he became the first bilateral shoulder-level amputee to wear and simultaneously control two of the Laboratory's Modular Prosthetic Limbs. Baugh was in town for two weeks in June as part of an APL-funded research effort to further assess the usability of the MPL, developed over the past decade as part of the Revolutionizing Prosthetics Program. Before putting the limb system through its paces, Baugh had to undergo a surgery at Johns Hopkins Hospital known as targeted muscle reinnervation. 'It's a relatively new surgical procedure that reassigns nerves that once controlled the arm and the hand,' explained Johns Hopkins trauma surgeon Albert Chi, M.D. 'By reassigning existing nerves, we can make it possible for people who have had upper-arm amputations to control their prosthetic devices by merely thinking about the action they want to perform.'
 