Mind Control

Published by MindTech Sweden

IBM “Computerbrain”


IBM: The Race to Reverse-Engineer the Human Brain

IBM’s Dawn Blue Gene/P Supercomputer Models a Cat Brain… What’s Next?

Cognitive Computing Project Aims to Reverse-Engineer the Mind

Imagine a computer that can process text, video and audio in an instant, solve problems on the fly, and do it all while consuming just 10 watts of power.

It would be the ultimate computing machine if it were built with silicon instead of human nerve cells.

Compare that to current computers, which require extensive, custom programming for each application, consume hundreds of watts in power, and are still not fast enough. So it’s no surprise that some computer scientists want to go back to the drawing board and try building computers that more closely emulate nature.

“The plan is to engineer the mind by reverse-engineering the brain,” says Dharmendra Modha, manager of the cognitive computing project at IBM Almaden Research Center.

In what could be one of the most ambitious computing projects ever, neuroscientists, computer engineers and psychologists are coming together in a bid to create an entirely new computing architecture that can simulate the brain’s abilities for perception, interaction and cognition. All that, while being small enough to fit into a lunch box and consuming extremely small amounts of power.

The 39-year-old Modha, a computer science engineer born in Mumbai, India, has helped assemble a coalition of the country’s best researchers in a collaborative project that includes five universities, among them Stanford, Cornell and Columbia, in addition to IBM.

The researchers’ goal is first to simulate a human brain on a supercomputer. Then they plan to use new nano-materials to create logic gates and transistor-based equivalents of neurons and synapses, in order to build a hardware-based, brain-like system. It’s the first attempt of its kind.

In October, the group bagged a $5 million grant from Darpa — just enough to get the first phase of the project going. If successful, they say, we could have the basics of a new computing system within the next decade.

“The idea is to do software simulations and build hardware chips that would be based on what we know about how the brain and how neural circuits work,” says Christopher Kello, an associate professor at the University of California-Merced who’s involved in the project.

Computing today is based on the von Neumann architecture, a design whose building blocks (the control unit, the arithmetic logic unit and the memory) are the stuff of Computing 101. But that architecture presents two fundamental problems: The connection between the memory and the processor can get overloaded, limiting the speed of the computer to the pace at which it can transfer data between the two. And it requires specific programs written to perform specific tasks.
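To make that bottleneck concrete, here is a back-of-the-envelope sketch; the throughput and bandwidth figures are illustrative assumptions of ours, not numbers from the article:

```python
# Back-of-the-envelope sketch of the von Neumann bottleneck.
# All figures are illustrative assumptions, not numbers from the article.

PEAK_OPS = 100e9        # processor peak: 100 billion ops/s (assumed)
BANDWIDTH = 10e9        # memory bus: 10 GB/s (assumed)
BYTES_PER_OPERAND = 8   # one double-precision value per operand

# If every operation needs one fresh operand from memory, the memory bus,
# not the processor, sets the pace.
ops_fed = BANDWIDTH / BYTES_PER_OPERAND

print(f"processor could do : {PEAK_OPS:.2e} ops/s")
print(f"memory can feed    : {ops_fed:.2e} ops/s")
print(f"utilization        : {ops_fed / PEAK_OPS:.1%}")   # ~1%
```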

In contrast, the brain distributes memory and processing functions throughout the system, learning through situations and solving problems it has never encountered before, using a complex combination of reasoning, synthesis and creativity.

“The brain works in a massively multi-threaded way,” says Charles King, an analyst with Pund-IT, a research and consulting firm. “Information is coming through all the five senses in a very nonlinear fashion and it creates logical sense out of it.”

The brain is composed of billions of interlinked neurons, or nerve cells that transmit signals. Each neuron receives input from 8,000 other neurons and sends an output to another 8,000. If the input is enough to agitate the neuron, it fires, transmitting a signal through its axon in the direction of another neuron. The junction between two neurons is called a synapse, and that’s where signals move from one neuron to another.
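The firing behavior described here can be caricatured in a few lines of code. Below is a minimal integrate-and-fire sketch; the threshold, leak and weight values are invented for illustration and are not the parameters of IBM’s simulator:

```python
import random

# Minimal integrate-and-fire sketch of the behavior described above:
# inputs accumulate, and the neuron "fires" once the total crosses a
# threshold. The threshold, leak and weight values are invented for
# illustration; they are not the parameters of IBM's simulator.

N_INPUTS = 8000      # each neuron listens to ~8,000 others (per the article)
WEIGHT = 0.0015      # contribution of one incoming spike (assumed)
THRESHOLD = 1.0      # firing threshold (assumed)
LEAK = 0.9           # potential decays between time steps (assumed)

potential = 0.0
for t in range(100):
    # Each upstream neuron spikes this step with 1% probability (assumed).
    incoming = sum(WEIGHT for _ in range(N_INPUTS) if random.random() < 0.01)
    potential = potential * LEAK + incoming
    if potential >= THRESHOLD:
        print(f"t={t}: fired, signaling ~8,000 downstream synapses")
        potential = 0.0   # reset after the spike
```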

“The brain is the hardware,” says Modha, “and from it arises processes such as sensation, perception, action, cognition, emotion and interaction.” Of these, the most important is cognition, the seat of which is believed to reside in the cerebral cortex.

The structure of the cerebral cortex is the same in all mammals. So researchers started with a real-time simulation of a small brain, about the size of a rat’s, in which they put together simulated neurons connected through a digital network. It took 8 terabytes of memory on a 32,768-processor BlueGene/L supercomputer to make it happen.

The simulation doesn’t replicate the rat brain itself, but rather imitates just the cortex. Despite being incomplete, the simulation is enough to offer insights into the brain’s high-level computational principles, says Modha.

The human cortex has about 22 billion neurons and 220 trillion synapses, making it roughly 400 times larger than the rat-scale model. A supercomputer capable of running a software simulation of the human brain doesn’t exist yet. Researchers estimate they would need a machine with a computational capacity of at least 36.8 petaflops and a memory capacity of 3.2 petabytes, a scale that supercomputer technology isn’t expected to reach for at least three years.
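Those figures can be cross-checked against each other; the short derivation below uses only the article’s own numbers, though the arithmetic is ours:

```python
# Cross-checking the article's own numbers (derivations are ours).
human_neurons = 22e9
human_synapses = 220e12
target_memory = 3.2e15        # 3.2 petabytes

# "Roughly 400 times larger than the rat-scale model" implies:
print(f"rat-scale neurons : ~{human_neurons / 400:.1e}")              # ~5.5e7

# The memory target implies a per-synapse budget of:
print(f"bytes per synapse : ~{target_memory / human_synapses:.1f}")   # ~14.5
```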

While waiting for the hardware to catch up, Modha is hoping some of the coalition’s partners can inch forward toward their targets.

Software simulation of the human brain is just one half of the solution. The other is to create a new chip design that will mimic the neuron and synaptic structure of the brain.

That’s where Kwabena Boahen, associate professor of bioengineering at Stanford University, hopes to help. Boahen, along with other Stanford professors, has been working on implementing neural architectures in silicon.

One of the main challenges to building this system in hardware, explains Boahen, is that each neuron connects to others through 8,000 synapses. It takes about 20 transistors to implement a synapse, so building the silicon equivalent of 220 trillion synapses is a tall order, indeed.
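The arithmetic behind that “tall order” is straightforward; the comparison chip size at the end is a rough assumption of ours:

```python
# The scale problem Boahen describes, in raw numbers from the article.
synapses = 220e12              # human-cortex synapse count
transistors_per_synapse = 20   # per the article

total = synapses * transistors_per_synapse
print(f"{total:.1e} transistors needed")       # 4.4e15

# For comparison, a large 2009-era chip held on the order of one billion
# transistors (a rough assumption of ours), so this is millions of chips:
print(f"~{total / 1e9:,.0f} billion-transistor chips")
```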

“You end up with a technology where the cost is very unfavorable,” says Boahen. “That’s why we have to use nanotech to implement synapses in a way that will make them much smaller and more cost-effective.”

Boahen and his team are trying to create a device smaller than a single transistor that can do the job of 20 transistors. “We are essentially inventing a new device,” he says.

Meanwhile, at the University of California-Merced, Kello and his team are creating a virtual environment that could train the simulated brain to experience and learn. They are using the Unreal Tournament videogame engine to help train the system. When it’s ready, it will be used to teach the neural networks how to make decisions and learn along the way.

Modha and his team say they want to create a fundamentally different approach. “What we have today is a way where you start with the objective and then figure out an algorithm to achieve it,” says Modha.

Cognitive computing hopes to change that perspective. The researchers say they want to create an algorithm capable of handling most problems thrown at it.

The virtual environment should help the system learn. “Here there are no instructions,” says Kello. “What we have are basic learning principles so we need to give neural circuits a world where they can have experiences and learn from them.”

Getting there will be a long, tough road. “The materials are a big challenge,” says Kello. “The nanoscale engineering of a circuit that is programmable, extremely small and that requires extremely low power requires an enormous engineering feat.”

There are also concerns that the $5 million Darpa grant and IBM’s largesse (researchers and resources), while enough to get the project started, may not be sufficient to see it through to the end.

Then there’s the difficulty of explaining that mimicking the cerebral cortex isn’t exactly the same as recreating the brain. The cerebral cortex is associated with functions such as thought, computation and action, while other parts of the brain handle emotions, co-ordination and vital functions. These researchers haven’t even begun to address simulating those parts yet.

Also see:
Pentagon Begins Fake Cat Brain Project
IBM Joins in Pentagon Quest for Fake Cat Brains
DARPA: Fake Brains, ASAP
DARPA 2009: Brains-on-a-Chip, Transparent Displays

Written By: Surfdaddy Orca

Date Published: November 30, 2009

IBM’s Dharmendra Modha has a vision. “Cognitive computing seeks to engineer the mind by reverse engineering the brain,” says Modha, a researcher at IBM’s Almaden Research Center, just south of San Francisco. “The mind arises from the brain, which is made up of billions of neurons that are linked by an Internet-like network.”

Modha’s future computer may have taken a giant leap forward with the recent announcement, at the SC09 high-performance computing conference in Portland, Ore., of a joint IBM project led by Modha with researchers from five universities and the Lawrence Berkeley National Laboratory. Using “Blue Matter,” a software platform for neuroscience modeling that pulls together archived magnetic resonance imaging (MRI) scan data and runs on a Blue Gene/P supercomputer, IBM has essentially simulated a brain with 1 billion neurons and 10 trillion synapses, one the company claims is about the equivalent of a cat’s cortex, or 4.5% of a human brain.
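That 4.5% figure is consistent with the cortical neuron count quoted in the Wired piece above, as a quick check shows:

```python
# Quick check of the "4.5% of a human brain" figure against the cortical
# neuron count quoted in the Wired article above.
simulated_neurons = 1e9       # Blue Gene/P simulation
human_cortex_neurons = 22e9   # human cerebral cortex

print(f"{simulated_neurons / human_cortex_neurons:.1%}")   # 4.5%
```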


The funding for the project comes from Phase 1 of the U.S. DARPA SyNAPSE project, which seeks “to discover, demonstrate, and deliver algorithms of the brain via a combination of (computational) neuroscience, supercomputing, and nanotechnology.” IBM’s announcement signals significant progress toward creating Modha’s future computer, one that will simulate and emulate the brain’s abilities for sensation, perception, action, interaction and cognition while rivaling the brain’s low power consumption and compact size.

This starts to raise interesting questions, as several bloggers noted somewhat tongue-in-cheek after the announcement. “So, will Blue Gene get the sudden urge to lick itself?” asks one blogger.

Will We Eventually Upload Our Minds?

Bruce Katz Interview

Written By: Surfdaddy Orca

Date Published: September 9, 2009

Bruce Katz received his Ph.D. in artificial intelligence from the University of Illinois. He is a frequent lecturer in artificial intelligence at the University of Sussex in the U.K. and serves as an adjunct professor of Computer Engineering at Drexel University in Philadelphia. Dr. Katz is the author of Neuroengineering the Future and Digital Design, as well as many journal articles.

Katz believes we are on the cusp of a broad neuro-revolution, one that will radically reshape our views of perception, cognition, emotion and even personal identity. Neuroengineering is rapidly advancing from perceptual aids such as cochlear implants to devices that will enhance and speed up thought. Ultimately, he says, this may free the mind from its bound state in the body into a platform-independent existence.


h+: What trends do you see in cognitive enhancement modalities and therapies (drugs, supplements, music, meditation, entrainment, AI and so forth)?

BRUCE KATZ: There are two primary types of cognitive enhancement — enhancement of intelligence and enhancement of creative faculties. Even though creativity is often considered a quasi-mystical process, it may surprise some that we are actually closer to enhancing this aspect of cognition than pure intelligence.

The reason is that intelligence is an unwieldy collection of processes, and creativity is more akin to a state, so it may very well be possible to produce higher levels of creative insight for a fixed level of intelligence before we are able to make people smarter in general.

There appear to be three main neurophysiological ingredients that influence the creative process. These are: 1) relatively low levels of cortical arousal; 2) a relatively flat associative gradient; and 3) a judicious amount of noise in the cognitive system. [Editor’s note: A person with a steep associative gradient makes only a few common associations with a stimulus word such as “flight,” whereas those with a flat gradient are able to make many associations with the stimulus word. Creative people have been found to have fairly flat gradients, and uncreative people have much steeper gradients.]

All three ingredients conspire to encourage the conditions whereby cognition runs outside of its normal attractors, and produces new and potentially valuable insights.

Solving compound remote associate (CRA) problems illustrates how these factors work. In a CRA problem, the task is to find a word that is related to three given items. For example, given “fountain,” “baking,” and “pop,” the solution would be “soda.”

The reason CRA problems are difficult, and why creative insight helps, is that the mind tends to fixate on the stronger associates of the priming words (for example, “music” for “pop”), which in turn inhibits the desired solution.
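A toy spreading-activation model makes the gradient idea concrete. Everything below (the association lists, the weights, the steepness knob) is invented for illustration; it is not a model from Katz’s work:

```python
# Toy spreading-activation sketch of the associative-gradient idea.
# All association weights are invented for illustration. A flat gradient
# spreads activation across many associates, so three weak "soda" links
# add up; a steep gradient fixates on each cue's strongest associate
# (e.g. "music" for "pop"), crowding out the remote solution.

associations = {
    "fountain": {"water": 0.6, "pen": 0.3, "soda": 0.1},
    "baking":   {"bread": 0.6, "oven": 0.3, "soda": 0.1},
    "pop":      {"music": 0.6, "corn": 0.3, "soda": 0.1},
}

def solve_cra(cues, steepness):
    """Sum cue->word activation; exponent >1 steepens the gradient, <1 flattens it."""
    scores = {}
    for cue in cues:
        for word, weight in associations[cue].items():
            scores[word] = scores.get(word, 0.0) + weight ** steepness
    return max(scores, key=scores.get)

cues = ["fountain", "baking", "pop"]
print(solve_cra(cues, steepness=0.3))   # flat gradient  -> 'soda'
print(solve_cra(cues, steepness=2.0))   # steep gradient -> fixates on 'water'
```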

What are the implications of this for artificially enhancing insight? First, any technique that quiets the mind is likely to have beneficial effects. These include traditional meditative techniques, but possibly also more brute-force technologies such as transcranial magnetic stimulation (TMS). Low-frequency pulses (below 1 Hz) enable inhibitory processes, and TMS applied in this manner to the frontal cortices could produce the desired result.

Second, the inhibition of the more literal and less associative left hemisphere through similar means could also produce good results. In fact, EEG studies of people solving CRA problems with insight have shown an increase in gamma activity (possibly indicative of conceptual binding activity) in the right but not the left hemisphere just prior to solution.

Finally, the application of noise to the brain, either non-invasively, through TMS, or eventually through direct stimulation may encourage it to be more “playful” and to escape its normal ruts.

In the not too distant future, we may not have to rely on nature to produce the one-in-a-million combination [of a high IQ and creative insight], and be able to produce it at will on many if not all neural substrates.

h+: What are some of the issues (legal, societal, ethical) that you anticipate for such technology?

BK: My own opinion is that — except in the case of minors — we must let an informed public make their own choices. Any government-mandated set of rules will be imperfect, and in any case will deviate from the needs and desires of its individual citizens.

What we in the neuroengineering community should be pushing for is a comprehensive freedom of thought initiative, ideally enshrined as a constitutional amendment rather than as a set of clumsy laws. And we should be doing so sooner rather than later, before individual technologies come online, and before we allow the “tyranny of the majority” to control a right that ought to trump all other rights.

h+: What is your vision for the future of cognitive enhancement and neurotechnology in the next 20 years?

BK: Ultimately, we want to be free of the limitations of the human brain. There are just too many inherent difficulties in its kludgy design — provided by evolution — to make it worthwhile to continue along this path.

As I describe in my book, Neuroengineering the Future, these kludges include:

  • Short-term memory limitations (typically seven plus or minus two items),
  • Significant long-term memory limitations (the brain can only hold about as much as a PC hard disk circa 1990),
  • Strong limitations on processing speed (although the brain is a highly parallel system, each neuron is a very slow processor),
  • Bounds on rationality (we are less than fully impartial processors, sometimes significantly so),
  • Bounds on creativity (most people go through their entire lives without making a significant creative contribution to humanity).

IBM brain simulations exceed scale of cat’s cortex

‘Historic milestone’ on way toward simulations of human brain
By Jon Brodkin, Network World
November 18, 2009
IBM’s quest to build a computer that can mimic the human brain has reached a new milestone, with what IBM calls the first brain simulation to exceed the scale of a cat’s cortex.

The simulation involves 1 billion spiking neurons and 10 trillion individual learning synapses, and was performed on an IBM Blue Gene/P supercomputer with 147,456 processors and 144TB of main memory.


“This is a tremendous historic milestone,” says Dharmendra Modha, the lead researcher on IBM’s cognitive computing project. “It shows that if we build a supercomputer with 1 exaflop computing power and 4 petabytes of main memory — which might be possible within the decade — then a human-scale simulation in real time will become possible.”
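Modha’s 4-petabyte figure can be roughly reproduced by scaling up the cat-scale run reported above; the extrapolation is ours, though the inputs come from these articles:

```python
# Scaling the cat-sized run up to Modha's human-scale estimate.
# Inputs are from these articles; the extrapolation itself is ours.
cat_synapses = 10e12          # cat-scale simulation
cat_memory = 144e12           # 144 TB of main memory

human_synapses = 220e12       # human cortex (per the Wired piece)
bytes_per_synapse = cat_memory / cat_synapses     # ~14.4

print(f"{human_synapses * bytes_per_synapse / 1e15:.1f} PB")
# ~3.2 PB, the same order as the ~4 PB Modha quotes
```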

Ultimately, IBM wants to build a computer that “simulates and emulates the brain’s abilities for sensation, perception, action, interaction and cognition, while rivaling the brain’s low power and energy consumption and compact size.” The goal is not to create robots that act like humans, but rather to create systems that can analyze streams of continually changing raw data in real time, and thus help businesses make better decisions.

“We are trying to build intelligent business machines,” Modha says. “As the amount of raw sensory data we create continues to grow massively and the world becomes instrumented and interconnected, businesses will need intelligence to monitor, prioritize, adapt and make rapid decisions.”

The project will last multiple decades, Modha says. Today, simulations are as powerful as 4.5% of a human cerebral cortex. But with current technology a human simulation would require a billion times more energy than is consumed by the human brain itself, a statistic that illustrates just how remarkable our brains really are.

“Mother nature has discovered a computing architecture that we have yet to invent,” Modha says. The brain “is more efficient than our computers by a factor of a billion, and it has the uncanny ability to integrate sight, hearing, taste, touch, smell, and to integrate this ambiguous streaming torrent of data and act on it.”

Eventually, researchers believe they can build a computer that not only mimics the function of the human brain but does so in a package of roughly the same physical size. “One of our goals is to build a chip with 1 million neurons and 10 billion synapses per square centimeter,” Modha says. “It is extremely challenging but you don’t change the world by solving simple problems.”

The cognitive computing project started more than four years ago, and is part of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program led by the Defense Advanced Research Projects Agency, which has provided IBM $21 million in funding.

In addition to IBM, the project involves researchers at Stanford University, Cornell University, Columbia University, University of Wisconsin-Madison, and UC-Merced. The Blue Gene/P supercomputer that performed the cat cortex simulation is based at the Lawrence Livermore National Laboratory.

IBM makes supercomputer significantly smarter than cat

By Jon Stokes | Last updated November 18, 2009 9:40 PM
An interdisciplinary team of researchers at IBM has presented a paper at the SC09 supercomputing conference describing a milestone in cognitive computing: the group’s massively parallel cortical simulator, C2, can now simulate a brain with about 4.5 percent of the cerebral cortex capacity of a human brain, and significantly more capacity than a cat’s.

No, this isn’t yet another example of Kurzweil-style guesstimating about how many “terabytes” of storage a human brain has. Rather, the authors quantify brain capacity in terms of numbers of neurons and synapses. The simulator, which runs on the Dawn Blue Gene/P supercomputer with 147,456 CPUs and 144TB of main memory, simulates the activity of 1.617 billion neurons connected in a network of 8.87 trillion synapses. The model doesn’t yet run at real time, but it does simulate a number of aspects of real-world neuronal interactions, and the neurons are organized with the same kinds of groupings and specializations as a mammalian cortex. In other words, this is a virtual mammalian brain (or at least part of one) inside a computer, and the simulation is good enough that the team is already starting to bump up against some of the philosophical issues raised about such models by cognitive scientists over the past decades.

In a nutshell, when a simulation of a complex phenomenon (brains, weather systems) reaches a certain level of fidelity, it becomes just as difficult to figure out what’s actually going on in the model—how it’s organized, or how it will respond to a set of inputs—as it is to answer the same questions about a live version of the phenomenon that the simulation is modeling. So building a highly accurate simulation of a complex, nondeterministic system doesn’t mean that you’ll immediately understand how that system works—it just means that instead of having one thing you don’t understand (at whatever level of abstraction), you now have two things you don’t understand: the real system, and a simulation of the system that has all of the complexities of the original plus an additional layer of complexity associated with the model’s implementation in hardware and software.

The more faithful the simulation gets, the bigger an issue this becomes. The researchers allude to it in section 3.2.2 of the paper, when they describe a measurement tool they call the “BrainCam.”

“When combined with the mammalian-scale models now possible with C2,” they write, “the flood of data can be overwhelming from a computational (for example, the total amount of data can be many terabytes) and human perspective (the visualization of the data can be too large or too detailed).”

The problem described above doesn’t mean that accurate simulations are worthless, however. You can poke, prod, and dissect a brain simulation without any of the ethical or logistical challenges that arise from doing similar work on a real brain. The IBM researchers endowed the model with checkpoint-based state-saving capabilities, so that the simulation can be rewound to certain states and then moved forward again under different conditions. They also have the facility for generating MPG movies of different aspects of the virtual brain in operation, movies that you could also generate by measuring an animal’s brain but at much lower resolutions. There’s even a virtual EEG, which lets the researchers validate the model by comparing it to EEGs from real brains.
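The checkpoint-and-rewind workflow is easy to sketch in miniature. The toy state layout below is invented; the article does not describe C2’s actual checkpoint format:

```python
import copy

# Miniature sketch of the checkpoint-and-rewind idea described above.
# The state layout here is invented for illustration; the article does
# not describe C2's actual checkpoint format.

class ToySimulation:
    def __init__(self, n_neurons):
        self.potentials = [0.0] * n_neurons
        self.step_count = 0

    def step(self, drive):
        # Leaky update of every neuron's membrane potential.
        self.potentials = [0.9 * p + drive for p in self.potentials]
        self.step_count += 1

sim = ToySimulation(4)
sim.step(drive=0.1)
checkpoint = copy.deepcopy(sim)           # snapshot the full state

sim.step(drive=0.1)                       # run one branch forward...
print(sim.step_count, sim.potentials[0])  # 2 0.19

sim = copy.deepcopy(checkpoint)           # ...rewind to the checkpoint
sim.step(drive=0.5)                       # and replay with different input
print(sim.step_count, sim.potentials[0])  # 2 0.59
```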

In the end, C2 is like having a (sorta) real cortex that you don’t fully understand, but that you can rewind, snap pictures of, and generally measure under different conditions so that you can do experiments on it that wouldn’t be possible (or ethical) with real brains.

Scaling and the singularity

One of the major results from the paper is that C2 exhibits “weak scaling.” In other words, as the total amount of memory in the model scales, the number of neurons and synapses that can be simulated scales roughly linearly, also. This is important, because it means that a future version of Blue Gene with two or three orders of magnitude more memory (and associated bandwidth and processing power) will be able to simulate an entire human brain.

The model also exhibits “strong scaling”: for a fixed model size, adding processors runs the simulation faster, so that it will eventually be able to simulate a cortex in real time.
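A small sketch separates the two regimes; the bytes-per-synapse constant is derived from the figures quoted above, and the linear speedup is an idealization that real codes only approximate:

```python
# Illustrative model of the two scaling regimes. The bytes-per-synapse
# constant is derived from the figures above; the linear speedup in
# strong_scaling() is an idealization that real codes only approximate.
BYTES_PER_SYNAPSE = 14.4

def weak_scaling(total_memory_bytes):
    """Weak scaling: more memory supports a proportionally bigger model."""
    return total_memory_bytes / BYTES_PER_SYNAPSE   # synapses simulable

def strong_scaling(base_seconds, base_cpus, cpus):
    """Strong scaling: a fixed model runs faster on more CPUs."""
    return base_seconds * base_cpus / cpus

print(f"{weak_scaling(144e12):.1e} synapses on 144 TB")   # ~1e13 (cat scale)
print(f"{weak_scaling(4e15):.1e} synapses on 4 PB")       # ~2.8e14, above the 2.2e14 human figure
print(f"{strong_scaling(600, 147_456, 2 * 147_456):.0f} s after doubling CPUs")
```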

3 Replies

  1. So what exactly am I talking about here? Simply put, it is the ability to use a small computer to feed information directly to the brain. This is, for now, the stuff of science fiction, but according to the people doing the research, it is not all that difficult to accomplish. The end result is seen as a nano-computer, implanted under the skin somewhere on the body. This computer would be wirelessly updated and able to wirelessly transmit information to nano-implants in the brain. Eventually you could have all the information in the world available to your brain. Everyone could, in theory, know everything.

  2. There is an urgent need for thorough public debate and consultation before these devices are let loose on society.

  3. Continued miniaturization has moved the semiconductor industry deep into the nano realm, with leading chipmakers heading toward CMOS at the 22nm process node. With transistors measuring tens of nanometers, researchers have begun to explore the interface between biology and electronics by integrating nanoelectronic components with living cells. While some researchers have already experimented with embedding living cells in semiconductor materials (see “Researchers integrate living brain cells into organic semiconductors”), other research explores the opposite direction, namely bringing nanoelectronics into living cells.
    The study of individual cells is of great importance in biomedicine. Many biological processes take place inside cells, and those processes can differ from cell to cell. The development of micro- and nanoscale tools smaller than cells will help researchers understand the cellular machinery at the single-cell level. All kinds of mechanical, biochemical, electrochemical and thermal processes could be studied with such devices.
    A typical human cell covers an area of about 10 square micrometers, which means that hundreds of today’s smallest transistors could fit inside a single cell. If the current pace of miniaturization continues, by 2020 roughly 2,500 transistors (the equivalent of the microprocessors in the first generation of personal computers) could fit into the area of a typical living cell. (A rough sketch checking these numbers appears after this comment.)

    (Figure: the number of transistors that fit within the area of a typical cell (10 μm²), plotted against year. Image: J. A. Plaza, IMB-CNM (CSIC))
    “Today’s micro- and nanoelectronics would already allow us to produce complex 3-dimensional microscale structures such as sensors and actuators,” José Antonio Plaza tells Nanowerk. “Chips smaller than cells can already be mass-produced at low cost, with nanometer precision in shape and dimensions. In addition, many different materials (semiconductors, metals and insulators) can be patterned on silicon chips with accurate dimensions and geometries.”
    Plaza, a researcher in the Micro- and Nanosystems Department at the Instituto de Microelectrónica de Barcelona IMB-CNM (CSIC), together with a team of colleagues, has shown that silicon chips smaller than cells can be produced, collected and internalized inside living cells using different techniques (lipofection, phagocytosis or microinjection) and, above all, that they can be used as intracellular sensors.
    The team published its findings in a recent issue of Small (“Intracellular Silicon Chips in Living Cells”).
    Plaza points out that many studies have addressed the fabrication and cellular uptake of micro- and nanoparticles of various shapes and arrangements. Such particles are mainly produced by chemical synthesis, and they have proven to have great impact in nanomedicine.
    “By contrast,” he says, “silicon chips have shown nearly endless applications in many areas of modern life. The point of our work was therefore to demonstrate that silicon chips, fabricated at the scale of micro- and nanoparticles, can be used as intracellular sensors. These chips are made of a typical semiconducting material, silicon, and are produced by common industrial production techniques based on photolithographic processes.”
    Rodrigo Gómez-Martínez, first author of the paper, explains that, compared with micro- and nanoparticles, intracellular silicon chips have several advantages:
    - Nanometric precision in shape and dimensions

    - Integration of many different materials with different dimensions and geometries

    - 3D nanostructuring

    - Integration of electronics

    - Integration of mechanical parts

    - And all the advantages of MEMS and NEMS

    In their experiments, the Spanish team fabricated different batches of polysilicon chips and then selected the most suitable device type, with lateral dimensions of 1.5-3 μm and a thickness of 0.5 μm, to place inside living cells. The cells came from Dictyostelium discoideum and from human HeLa cells.
    To further demonstrate the versatility of the technique, they studied the integration of different materials in a single chip, as well as its 3D nanostructuring capability, using other common microelectronics techniques such as FIB milling.

    (Figure: SEM images of a 3 μm x 3 μm x 0.5 μm polysilicon intracellular chip shown before (left) and after (right) 3D coil nanostructuring by FIB nanomachining. Scale bar ~3 μm. Reproduced with permission from Wiley-VCH Verlag)
    “Preliminary experiments incubating HeLa cells with polysilicon chips gave a low yield of internalized intracellular chips,” say Patricia Vázquez and Teresa Suárez, the biologists on the team. “We then used lipofection (encapsulation of the material in a lipid vesicle called a liposome) to obtain more chip-containing cells.”
    Once the chips were inside living cells, the researchers verified that the cells were still alive and healthy. They found that over 90% of the chip-containing HeLa cell population remained viable 7 days after lipofection.
    “Based on our experience, we can conclude that silicon-based, top-down-fabricated intracellular chips can be internalized by living eukaryotic cells without disturbing cell viability, and that functionalized chips can be used as intracellular sensors since they can interact with the cytoplasm,” says Plaza. “These chips have the same dimensions as many synthesized micro- and nanoparticles, but they carry the advantages of silicon chip technology. Intracellular chips offer greater flexibility and versatility in shape and size, and they can be nanostructured in three dimensions and integrated with multiple materials (semiconductors, insulators, metals) at the chip-scale level.”
    The main applications of future intracellular chips will be the study of individual cells, as well as the early detection of diseases and of new cellular repair mechanisms.
    The Spanish team’s vision is that intracellular silicon-based chips offer endless possibilities for the design of innovative devices with intracellular applications.
    “In the near future, new intracellular chips will enable characterization and quantification at the single-cell level, in vivo real-time monitoring of cellular events, and specific targeting of sites of action or efficient drug delivery within target cells,” says Plaza.
    What the Spanish team has done is only a first step toward innovative intracellular silicon-based MEMS and NEMS. The challenge for future research will be to develop new technologies for producing MEMS and NEMS smaller than cells (small devices with mechanical, electrical, magnetic and/or chemical parts).
    Clearly, the effect of these structures on cell viability is a fundamental question. Although the first results were promising, further systematic cytotoxicity and biocompatibility tests will be necessary if new materials or 3D designs are to be used for intracellular applications.
    “How these devices will interact with living cells and perform sensing tasks is a fascinating new question.”
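The transistors-per-cell claim in the comment above is easy to gut-check. The density figures in this sketch are rough assumptions of ours (order-of-magnitude values for 22nm-era logic and an extrapolated 2020 process), not numbers from the original text:

```python
# Rough check of the transistors-per-cell-area claim in the comment above.
# Density figures are our assumptions, not from the original text.
cell_area_um2 = 10.0       # typical cell footprint, per the comment

density_22nm = 10e6        # ~1e7 transistors per mm^2 at 22 nm (assumed)
density_2020 = 250e6       # extrapolated 2020 density (assumed)

MM2_PER_UM2 = 1e-6
print(f"22 nm era: ~{cell_area_um2 * MM2_PER_UM2 * density_22nm:.0f} transistors")   # ~100
print(f"2020:      ~{cell_area_um2 * MM2_PER_UM2 * density_2020:.0f} transistors")   # ~2,500
```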
