sharetrader
  1. #11 clip, Senior Member (joined Oct 2013, 760 posts)

    Quote Originally Posted by SCHUMACHER:
    I think you will find this is bigger than you may think - they are talking with Fortune 500 companies at the moment and they are working on the next milestones. I expect this to head back up now, as we are seeing today, and have a target of 30c by end of week.
    Tech is probably the one sector where I'm able to read and understand the announcements due to my career. I'm not underestimating this, just being careful not to repeat past mistakes. I hadn't bought anything in a year or so; picked up gold/miners, tech and biomed in the past few days. Very much green amongst the sea of ASX red this week.

  2. #12 Member (joined Mar 2006, Auckland, New Zealand, 482 posts)

    Quote Originally Posted by clip:
    Tech is probably the one sector where I'm able to read and understand the announcements due to my career. I'm not underestimating this, just being careful not to repeat past mistakes. I hadn't bought anything in a year or so; picked up gold/miners, tech and biomed in the past few days. Very much green amongst the sea of ASX red this week.

    Fair comments - it pays to be risk averse, but commodities are scarier for me: the CMI commodities index looks sick and global growth is worsening. The experts are saying technology is the best-performing sector; even biotech moves slower.

    The technology sector is doing well. Anyway, my prediction came true - we hit 31c, so I would expect a bit of a sell-off now after the run from 27c.
    29.5c is now strong support, so it should hold there.
    \"if women didn,t exist , all the money in the world would mean nothing\" Aristotle Anasis.

    \"The trend is your friend\"

    \"A mans reach should always extend beyond his grasp" J.F Kennedy

  3. #13 Senior Member (joined Oct 2013, 760 posts)

    Yep, I have taken some profits and am getting close to free-carrying now. It's dropping a bit but I'm not planning to sell off, with milestone announcements expected this month.

  4. #14 Senior Member (joined Oct 2013, 760 posts)

    BRN have released an investor presentation today which is well worth reading for anyone even mildly interested; it summarises their technology, goals and key upcoming delivery dates in a way that is not too technical and should be understandable by most people. Up another 14% today - hasn't had a red day in almost two weeks. http://www.asx.com.au/asx/statistics...idsId=01668917

    Until they meet milestone 2 (putting the technology onto a hardware chip), which is expected in the next month, the technology is still experimental; they have demonstrated its capabilities in computer simulations/models only, although everything they have released so far suggests they are on track. This is still a speculative stock at this stage (regardless of how confident current holders may be) - DYOR, fair disclosure etc.

    Their website is also worth a look: http://brainchipinc.com/

  5. #15 Joshuatree (IMO; joined Aug 2010, Floating Anchor Shoals, 9,809 posts)

    S/P has taken a dive; pretty well down to the 1-for-26 non-renounceable offer @ 15c in April. When will they start making sales? Running low on cash, hence the $4m raise, with $1.75m underwritten for a 6% fee. Too early, too risky for me.
    Prospectus - Non Renounceable Rights Issue-BRN.AX
    Investor Presentation-BRN.AX
    Annual Report to shareholders-BRN.AX
    Appendix 4C - quarterly-BRN.AX
    Company Update-BRN.AX
    Virtual Roadshow Presentation-BRN.AX
    RIGHTS ISSUE UNDERWRITTEN FOR $1.75M AND SHAREHOLDER SUPPORT-BRN.AX

  6. #16 Peer (joined Jan 2016, Christchurch, 62 posts)

    There is this little university in the USA that has developed its own "brain chip", going the other way, capable of doing deep learning. I can't help but feel that there is more hype than reality around BrainChip - lots of buzzwords and promises, not much in the way of progress (reminds me of In***net, to be honest).

    Personally, I'd go with that little university winning this race. You might have heard of them... MIT. http://news.mit.edu/2016/neural-chip...e-devices-0203

  7. #17 IMO (joined Aug 2010, Floating Anchor Shoals, 9,809 posts)

    Many thanks Knot; even this technophobe can understand it, so I have posted it below.
    Larry Hardesty | MIT News Office

    In recent years, some of the most exciting advances in artificial intelligence have come courtesy of convolutional neural networks, large virtual networks of simple information-processing units, which are loosely modeled on the anatomy of the human brain.

    Neural networks are typically implemented using graphics processing units (GPUs), special-purpose graphics chips found in all computing devices with screens. A mobile GPU, of the type found in a cell phone, might have almost 200 cores, or processing units, making it well suited to simulating a network of distributed processors.
    At the International Solid State Circuits Conference in San Francisco this week, MIT researchers presented a new chip designed specifically to implement neural networks. It is 10 times as efficient as a mobile GPU, so it could enable mobile devices to run powerful artificial-intelligence algorithms locally, rather than uploading data to the Internet for processing.
    Neural nets were widely studied in the early days of artificial-intelligence research, but by the 1970s, they’d fallen out of favor. In the past decade, however, they’ve enjoyed a revival, under the name “deep learning.”
    “Deep learning is useful for many applications, such as object recognition, speech, face detection,” says Vivienne Sze, the Emanuel E. Landsman Career Development Assistant Professor in MIT's Department of Electrical Engineering and Computer Science whose group developed the new chip. “Right now, the networks are pretty complex and are mostly run on high-power GPUs. You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications.”
    The new chip, which the researchers dubbed “Eyeriss,” could also help usher in the “Internet of things” — the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have sensors that report information directly to networked servers, aiding with maintenance and task coordination. With powerful artificial-intelligence algorithms on board, networked devices could make important decisions locally, entrusting only their conclusions, rather than raw personal data, to the Internet. And, of course, onboard neural networks would be useful to battery-powered autonomous robots.
    Division of labor
    A neural network is typically organized into layers, and each layer contains a large number of processing nodes. Data come in and are divided up among the nodes in the bottom layer. Each node manipulates the data it receives and passes the results on to nodes in the next layer, which manipulate the data they receive and pass on the results, and so on. The output of the final layer yields the solution to some computational problem.
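    As a rough illustration of that layer-by-layer flow, here is a minimal sketch in Python/numpy. The layer sizes, random weights and ReLU activation are invented for the example - this is not MIT's or BrainChip's implementation, just the generic structure the paragraph describes.

```python
import numpy as np

def forward(x, layers):
    """Push an input through a stack of layers: each layer's nodes transform
    what they receive and hand the result on to the next layer."""
    for weights, bias in layers:
        x = np.maximum(0.0, x @ weights + bias)  # each node: weighted sum, then ReLU
    return x

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 4]  # toy network: 8 inputs -> 16 nodes -> 16 nodes -> 4 outputs
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

print(forward(rng.standard_normal(8), layers))  # the final layer's output is the "answer"
```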
    In a convolutional neural net, many nodes in each layer process the same data in different ways. The networks can thus swell to enormous proportions. Although they outperform more conventional algorithms on many visual-processing tasks, they require much greater computational resources.
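    And to make "many nodes in each layer process the same data in different ways" concrete, here is a hedged sketch of a convolutional layer: several small filters (the count and sizes below are arbitrary) all slide over the same image, each producing its own feature map.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one small filter over the image (no padding, stride 1)."""
    kh, kw = kernel.shape
    out = np.empty((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(1)
image = rng.standard_normal((28, 28))                # one small grayscale image
filters = rng.standard_normal((4, 3, 3))             # four different 3x3 filters
feature_maps = [conv2d(image, f) for f in filters]   # same data, four different "views"
print([fm.shape for fm in feature_maps])             # [(26, 26), (26, 26), (26, 26), (26, 26)]
```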
    The particular manipulations performed by each node in a neural net are the result of a training process, in which the network tries to find correlations between raw data and labels applied to it by human annotators. With a chip like the one developed by the MIT researchers, a trained network could simply be exported to a mobile device.
    This application imposes design constraints on the researchers. On one hand, the way to lower the chip’s power consumption and increase its efficiency is to make each processing unit as simple as possible; on the other hand, the chip has to be flexible enough to implement different types of networks tailored to different tasks.
    Sze and her colleagues — Yu-Hsin Chen, a graduate student in electrical engineering and computer science and first author on the conference paper; Joel Emer, a professor of the practice in MIT’s Department of Electrical Engineering and Computer Science, and a senior distinguished research scientist at the chip manufacturer NVidia, and, with Sze, one of the project’s two principal investigators; and Tushar Krishna, who was a postdoc with the Singapore-MIT Alliance for Research and Technology when the work was done and is now an assistant professor of computer and electrical engineering at Georgia Tech — settled on a chip with 168 cores, roughly as many as a mobile GPU has.
    Act locally
    The key to Eyeriss’s efficiency is to minimize the frequency with which cores need to exchange data with distant memory banks, an operation that consumes a good deal of time and energy. Whereas many of the cores in a GPU share a single, large memory bank, each of the Eyeriss cores has its own memory. Moreover, the chip has a circuit that compresses data before sending it to individual cores.
    Each core is also able to communicate directly with its immediate neighbors, so that if they need to share data, they don’t have to route it through main memory. This is essential in a convolutional neural network, in which so many nodes are processing the same data.
    The final key to the chip’s efficiency is special-purpose circuitry that allocates tasks across cores. In its local memory, a core needs to store not only the data manipulated by the nodes it’s simulating but data describing the nodes themselves. The allocation circuit can be reconfigured for different types of networks, automatically distributing both types of data across cores in a way that maximizes the amount of work that each of them can do before fetching more data from main memory.
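    The three efficiency ideas above (per-core local memory, direct neighbour-to-neighbour sharing, and an allocator that keeps main-memory fetches rare) can be caricatured in a few lines. This is only an illustrative model of the dataflow idea, not the actual Eyeriss design; the function names, costs and the two cores are invented for the example.

```python
from collections import Counter

stats = Counter()

def fetch_from_dram(core_mem, key, value):
    """Costly operation: pull data from distant main memory into a core's local store."""
    stats["dram_fetches"] += 1
    core_mem[key] = value

def share_with_neighbour(src_mem, dst_mem, key):
    """Cheap operation: a core hands data it already holds straight to an adjacent core."""
    stats["neighbour_transfers"] += 1
    dst_mem[key] = src_mem[key]

core_a, core_b = {}, {}                         # two adjacent cores, each with its own local memory

fetch_from_dram(core_a, "tile_0", [1, 2, 3])    # only core A pays the DRAM cost
share_with_neighbour(core_a, core_b, "tile_0")  # core B reuses the data without touching DRAM

print(stats)  # Counter({'dram_fetches': 1, 'neighbour_transfers': 1})
```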
    At the conference, the MIT researchers used Eyeriss to implement a neural network that performs an image-recognition task, the first time that a state-of-the-art neural network has been demonstrated on a custom chip.
    “This work is very important, showing how embedded processors for deep learning can provide power and performance optimizations that will bring these complex computations from the cloud to mobile devices,” says Mike Polley, a senior vice president at Samsung’s Mobile Processor Innovations Lab. “In addition to hardware considerations, the MIT paper also carefully considers how to make the embedded core useful to application developers by supporting industry-standard [network architectures] AlexNet and Caffe.”

  8. #18 Member (joined Dec 2015, 69 posts)

    I'm still very bullish on Brainchip.
    Deep learning is making a lot of noise, but it's old technology. The power/processing requirements are too high for small embedded and remote devices.
    Rapid Autonomous Learning (BrainChip) is in its infancy, so it will take a little while to get going, but it's definitely the future. The disruption factor is just too significant for it to be ignored.
    They've kept all their promises thus far and delivered significant progress in a short time, and the advisory board alone makes me confident.

    Patience is required. The hype got a bit ahead of itself and some people got burnt big time, so sentiment is low. If you believe in the technology, this would be a perfect time to accumulate.

  9. #19 Senior Member (joined Aug 2008, 590 posts)

    Up 143% on good news and indications of more to come...

  10. #20 Advanced Member (joined Sep 2013, 1,917 posts)

    Yes - BRN up 15c today (142.9%) to $0.255 on $14,775K.
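    A quick check of those numbers (assuming the quoted 15c rise and $0.255 close, which together imply a prior close of 10.5c): 0.255 - 0.150 = 0.105, and 0.150 / 0.105 ≈ 1.429, i.e. a gain of roughly 142.9%.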

    BrainChip to roll out game protection technology at major Las Vegas casino following successful completion of trial

    Highlights:
    • BrainChip will roll out its innovative casino table security technology at one of Las Vegas' biggest casinos following the successful completion of the phase one trial
    • The roll out of BrainChip's Spiking Neural Network-based SNAPvision technology will initially be across all baccarat tables in the trial casino, with a second casino to be added in the next month, followed by a roll out to more of the group's casinos
    • The roll out will deliver an immediate and growing revenue stream to BrainChip
    • A second product application is currently being trialled across a range of table games including Blackjack, Blackjack Switch and Ultimate Poker
    • Huge market opportunity, with the casino management industry, which includes security and surveillance, expected to grow to US$4.5 billion in size by 2018
