
Design And Reuse - Headline News
The industry source for engineers and technical managers worldwide.

Design And Reuse - Industry Expert Blogs
  • MIPI I3C v1.1 - A Conversation with Ken Foust (MIPI Alliance Blog: The Wires Behind Wires - Sharmion Kerley, MIPI Director of Marketing and Membership)
    Q: What makes the newly released MIPI I3C® v1.1 different from v1.0, and why is it important to developers?
    A: I see Version 1.0 as setting a new baseline. We came together to make an interface that would dramatically simplify the integration of sensors and address many of the key pain points that all of us in the industry were dealing with when working with I2C and SPI interfaces. We think we accomplished that with v1.0—anywhere sensors are used, MIPI I3C belongs.
    Now v1.1 is the first update to build on that foundation. One of our original goals in launching development of MIPI I3C was to provide an upgrade path that would allow implementers to take advantage of I3C wherever I2C and SPI were being used. Those interfaces were developed a long time ago and have proliferated into several industries and several types of devices for many different use cases. Although v1.0 set the stage, there remained features needed to ensure MIPI I3C’s suitability for the very broad range of industries and use cases—such as memory management, server control and manageability, “always-on” imaging, debug communications, touchscreen and sensor device command, and power management.
    So, with MIPI I3C v1.1 we now have this scalable, medium-speed utility and control bus that can connect peripherals to an application processor in everything from smartphones and wearables, to high-performance servers, automotive applications and more. Some of these applications weren’t even originally envisioned when we started creating MIPI I3C. This ongoing move beyond mobile and sensing was our focus in the development of v1.1.
    View the full article HERE

  • Year in Review: 2019 Progress Builds Momentum for MIPI in Mobile and Beyond (MIPI Alliance Blog: The Wires Behind Wires - Peter Lefkin, MIPI Managing Director)
    As we ring in the new year and look ahead to exciting developments across the MIPI ecosystem, it's clear that our work in 2019 to advance important initiatives and specifications has set up the Alliance for continued success in 2020. The end of the year also marks the close of our biennial strategic priorities period. While we are starting the process to define our 2020-21 priorities, taking a moment to reflect on our previous priorities offers us a framework for celebrating accomplishments, recognizing the hard work of MIPI members and charting a path forward.
    View the full article HERE

  • 5G in 2020 (Breakfast Bytes - Paul McLellan)
    There is a famous quote, attributed to Mark Twain but more likely said by his friend Charles Dudley Warner: "Everybody talks about the weather but nobody does anything about it." I think that 5G in 2020 is going to be like that. You will hear lots of talk about 5G, but you probably won't buy a 5G phone, and if you do, you are most likely to make a 5G call if you are in Asia or Europe. I am not a believer that 5G is going to be very disruptive, especially in the next year or two. The reality of every generation of cellular technology is that it trades off increased complexity of signal processing and antenna design for better use of the bandwidth, which is the real limited resource in mobile. So higher performance and more capacity. Of course, the industry, both operators and equipment suppliers, tends to oversell the capabilities.
    View the full article HERE

  • Fast Access to Accelerators: Enabling Optimized Data Transfer with RISC-V (Sifive Blog - Shubu Mukherjee, Chief SoC Architect, SiFive)
    Domain-specific accelerators (DSAs) are becoming increasingly common in systems-on-chip (SoCs). A DSA provides higher performance per watt than a general-purpose processor by optimizing the specialized function it implements. Examples of DSAs include compression/decompression units, random number generators and network packet processors. A DSA is typically connected to the core complex using a standard IO interconnect, such as an AXI bus.
    View the full article HERE

  • PCI Express 5 vs. 4: What's New? [Everything You Need to Know] (Rambus Blog)
    Introduction What’s new about PCI Express 5 (PCIe 5)? The latest PCI Express standard, PCIe 5, represents a doubling of speed over the PCIe 4.0 specifications. Want to know the best part in terms of feeds and speeds? We’re talking about 32 Gigatransfers per second (GT/s) vs. 16GT/s, with an aggregate x16 link bandwidth of almost 128 Gigabytes per second (GBps). Impressive, isn’t it? It is, because this speed boost is needed to support a new generation of artificial intelligence (AI) and machine learning (ML) applications as well as cloud-based workloads. Want to know a little more about AI/ML applications and cloud-based workloads? Both are significantly increasing network traffic. In turn, this is accelerating the implementation of higher speed networking protocols which are seeing a doubling in speed approximately every two years. So, we’ve taken a quick look at feeds and speeds, but what else does PCIe 5 bring to the table? You can find everything you need to know in the article below. Let’s dive right in.
    View the full article HERE
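    The "almost 128 Gigabytes per second" figure is easy to sanity-check. As a back-of-the-envelope sketch (the helper name here is made up, not from the article): 32 GT/s per lane with PCIe's 128b/130b line encoding gives about 63 GB/s in each direction of a x16 link, or roughly 126 GB/s aggregate across both directions.

```python
# Back-of-the-envelope PCIe link bandwidth: transfer rate per lane, lane
# count, and encoding efficiency -> usable gigabytes per second.
# 128b/130b encoding applies from PCIe 3.0 onward, including PCIe 4 and 5.
def pcie_bandwidth_gbps(gt_per_s: float, lanes: int,
                        enc_num: int = 128, enc_den: int = 130) -> float:
    """Usable bandwidth in gigabytes per second, one direction."""
    payload_gbits = gt_per_s * lanes * enc_num / enc_den  # gigabits/s after encoding
    return payload_gbits / 8                              # bits -> bytes

pcie4_x16 = pcie_bandwidth_gbps(16, 16)  # ~31.5 GB/s per direction
pcie5_x16 = pcie_bandwidth_gbps(32, 16)  # ~63.0 GB/s per direction
print(round(2 * pcie5_x16, 1))           # aggregate of both directions: 126.0
```

    Running the same helper with 16 GT/s reproduces PCIe 4.0's roughly 31.5 GB/s per direction, which is the generation-over-generation doubling the article describes.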

  • Open Source in 2020 (Breakfast Bytes - Paul McLellan)
    I recently wrote a couple of posts about open-source EDA tools, OpenROAD: Open-Source EDA from RTL to GDSII and 2nd WOSET Workshop on Open-Source EDA. I have also written about open-source in general, as an approach to development and an approach to business in a post from over four years ago that I think stands up well: The Paradox of Open Source. The reason I called it a paradox is that it seems to be one of the most effective ways to develop software but it doesn't really have a business model that works. All over the internet are pieces complaining that open source is broken. A particularly good one is The Cathedral and the Bizarre—a critique of twenty years of open source. The title is a play on The Cathedral and the Bazaar, a book by Eric S. Raymond (also known as "esr") who introduced the term "open source" as a replacement for "free software", which always needed to be qualified as "free as in speech, not as in beer" but was the term introduced by the pioneer in the space, Richard Stallman ("rms"). I discussed the book in my earlier post linked to above.
    View the full article HERE

  • Imagination in action at CES 2020 (With Imagination Blog - Benny Har-Even, Imagination)
    With so many companies and products vying for attention, the show floor at CES can be an overwhelming place. If you’re in Las Vegas and want to get away from all of that for a while then you can make your way to the Venetian hotel (Floor 27, Suite 140) where you can come and talk to us about our plans for the year in GPU, AI and connectivity. The demos on show include:
    Surround View: A demo of a production-quality surround-view application for cars. It takes inputs from four cameras, which are merged into one perspective-correct image on a PowerVR Series6 GPU running on a Renesas R-Car H3 platform. These images are overlaid onto a generated 3D car, creating a unique, highly realistic 3D car model that reflects its surroundings in real time.
    Pose Estimation: Pose estimation is an advanced form of computer vision that enables a computing device to recognise the exact position of a person in space, such as whether they are sitting, standing, crouching, etc. This is not trivial to do in standard computer vision and can be done much faster using a neural network. It can be used for entertainment, such as adding realism to video games, or for security and surveillance purposes. Our demo shows the PoseNet model running fast and efficiently on our PowerVR Series2NX AX2185 neural network accelerator, at the same time as other networks.
    View the full article HERE

  • TSMC bags up Apple for 2020 (Mannerisms - David Manners)
    TSMC will remain the sole foundry partner of Apple for the chips for its 2020 iPhone series, reports the Chinese-language Commercial Times. The chips will be made on TSMC’s 5nm EUV process and production starts in Q2 2020.
    View the full article HERE

  • Imagination: Predictions 2020 (With Imagination Blog - Jo Jones, Imagination)
    As we get into 2020, we know that once again it’s time for the team at Imagination to gaze into its crystal ball to see if we can discern what the year will bring. This time we’ve divided our predictions into market segments. Read on for our insights.
    Automotive
    Progression of automotive safety: Some predictions you don’t need a crystal ball for, and one of these is the evolution of safety technologies in automotive. The Euro NCAP (New Car Assessment Programme) has already published a roadmap of the features that will be required in order to grade new vehicles, and this tells us a lot about what we can expect to see. In 2020 specifically, there will be a focus on safety between vehicles through vehicle-to-everything (V2X) communication, allowing vehicles to exchange safety and real-time environment information. Another technology that will come to the fore is driver monitoring, which is rapidly maturing thanks to advances in facial analysis that detects how alert and attentive the driver is. These technologies utilise key Imagination IP from the Ensigma and AI product families.
    Goodbye fuel, hello electrification: The move to alternative fuel vehicles will accelerate in 2020, with over 50 new electric-powered models coming out. Tesla has just overtaken BYD Auto as the world’s biggest electric vehicle (EV) supplier. However, the likes of Volvo, VW, Porsche, Audi, Ford, Honda, Hyundai and others will put pressure on its position. Registrations of electric cars were up 125% in 2019, and this growth will increase in 2020 as a broad range of cars becomes available. Significant financial incentives to move to EVs increase the likelihood of this happening, while consumer concerns will reduce as battery technology continues to yield a 6-8% year-on-year improvement. Fast-charging technology will start to be rolled out more broadly, easing another of the main issues for consumers.
    View the full article HERE

  • PCI-SIG in 2019: A Year in Review (PCI-SIG Blog - Kurt Lender, Marketing Work Group Co-Chair, PCI-SIG)
    As I look back on 2019, I’m proud to report that it has been a banner year for PCI-SIG®. This year we saw the progression of the PCI Express® (PCIe®) specification ecosystem, prioritized several education initiatives and promoted PCIe architecture around the world. Though it’s difficult to narrow the list down, my top eight 2019 milestones are below. The PCI Express 4.0 specification entered official Compliance testing and is being adopted by industry-leading vendors. At this year’s PCI-SIG Compliance Workshop #110, our attendees participated in the first official PCIe 4.0 specification compliance tests and over 40 entries have been added to the PCIe 4.0 Integrators List. PCIe 4.0 technology reaches 16GT/s and products are entering a variety of market segments.
    View the full article HERE

  • RISC-V and Arm - signs of an open heterogeneous world (UltraSoC Blog - Andy Gothard, UltraSoC)
    I’m back in San Jose for the second time in just a couple of months, and it’s been interesting to compare the two events I’ve attended here recently – Arm TechCon and the 2019 RISC-V Summit. The worlds of RISC-V and Arm have often, in the last couple of years, tried to pretend that they’re hermetically sealed, separate environments. But increasingly, it’s becoming clear that nothing could be further from the truth. At UltraSoC we’ve been aware for some time that the future is heterogeneous. For RISC-V, that means coexisting with, rather than replacing, Arm, and taking a more holistic, system-level view of the ecosystem. And after this year’s TechCon, it seems that ‘heterogeneous’ will also be taking on a new meaning. For a long time, the techie in-joke was that for Arm, ‘heterogeneous computing’ meant a chip with an M3 and an A9 on the same die (for those not in the know, the M3 and A9 are both Arm processors). But the big news back in October was Arm’s announcement that it is opening up its instruction set so that people can add custom instructions.
    View the full article HERE

  • Run Faster, Welcome 800G and Terabit Speeds with Ethernet 802.3ck (VIP Experts Blog - Synopsys)
    IP traffic has been growing at a rate many could not have imagined. Driven by expanding numbers of Internet users and devices, and by faster wireless and fixed broadband access, Ethernet data rates have now reached 400G. From 1 Gbps in 1997, to 10 Gbps in 2004, to 100 Gbps in 2010, it took a while for the next step up to 400 Gbps. Steered by ever-increasing internet traffic, there is always a need for more bandwidth.
    Evolution of Ethernet
    IEEE built on the existing standards to frame a pathway to 400G. The 100 Gbps rate, based on four parallel lanes of 25 Gbps, was the starting point for 400G development. However, a method of increasing the serial rate was clearly needed for 400G: a 400G data rate built from 16 x 25 Gbps parallel lanes would require 32 fibers per link for transmit and receive. A solution using multiple parallel fibers was acceptable for short-distance links but not for longer cable lengths, for the following reasons. The number of transmission lines cannot be increased without limit, because beyond a certain frequency the signal transit time cannot be kept equal across all signal lines. Another point to consider is electromagnetic interference between serial lines; the higher the frequency, the greater the probability of interference. Lastly, a larger number of cables drastically increases the cost over longer distances.
    View the full article HERE
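    The lane arithmetic above is simple to sketch: dividing the target rate by the per-lane rate gives the lane count, and a duplex link built from parallel fibers needs one fiber per lane in each direction. (The helper names below are invented; the 25 Gbps lane rate is from the article.)

```python
import math

# Lane and fiber counts for an Ethernet link built from parallel lanes.
def lanes_needed(total_gbps: float, lane_gbps: float) -> int:
    """Parallel lanes required to reach the target aggregate rate."""
    return math.ceil(total_gbps / lane_gbps)

def duplex_fibers(lanes: int) -> int:
    """One fiber per lane per direction (transmit plus receive)."""
    return 2 * lanes

lanes_400g = lanes_needed(400, 25)            # 16 lanes of 25 Gbps each
print(lanes_400g, duplex_fibers(lanes_400g))  # 16 32
```

    The same arithmetic reproduces the 100G starting point: 4 lanes of 25 Gbps, or 8 fibers per duplex link, which is why pushing the per-lane serial rate up matters so much at 400G and beyond.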

  • What's Happening in RISC-V Land? (Breakfast Bytes - Paul McLellan)
    Last week was IEDM, the International Electron Devices Meeting. I will write about that later this week, because last week was also the RISC-V Summit, which was originally scheduled for the week before in the Santa Clara Convention Center, but got pushed out a week and moved to the San Jose Convention Center. IEDM is in San Francisco, so I mostly attended IEDM, but I came down to San Jose for much of the first day of the RISC-V Summit. I'll cover that today. I have been following the RISC-V story since EDPS 2016 in Monterey, when I first heard about it. If you don't know what RISC-V is (and you might therefore not know it is pronounced "risk five") then see my posts RISC-V—Instruction Sets Want to Be Free and RISC-V Summit Preview: Pascal or Linux? In January this year, in a post titled RISC-V Cores: SweRV and ET-Maxion, I wrote: "The one-sentence summary of the state of RISC-V is that it is already dominant in academia, and has some traction with the defense industry, too. I doubt any chips will be built in academia that are not RISC-V-based, and it is clear that a lot of ideas for things like hardware security will be prototyped in RISC-V. The big question is how significant it will be in the commercial world." That's a pretty good summary of what I saw at this year's summit. There is lots of progress in the standardization process, building out the software ecosystem, growing the RISC-V Foundation, and more. When Rick O'Connor was running the RISC-V Foundation (he's now running the OpenHW Group) he told me that there was a huge funnel of products from big names that hadn't yet been announced. But not a lot that met that description was announced this year, apart from Samsung (see below).
    View the full article HERE

  • Plundervolt steals keys from cryptographic algorithms (Rambus Blog)
    An international team of white hat researchers has successfully corrupted the integrity of Intel Software Guard Extensions (SGX) on Intel Core processors with a software-based fault injection attack aptly dubbed “Plundervolt.” Using Plundervolt, attackers can recover keys from cryptographic algorithms (including the AES-NI instruction set extension) and induce memory safety vulnerabilities into bug-free enclave code.
    View the full article HERE

  • System in Package, Why Now? (Breakfast Bytes - Paul McLellan)
    At HOT CHIPS this summer, one of the things I noticed was just how many of the designs being presented were in some form of 3D packaging with multiple die. I wrote about many of them in my post HOT CHIPS: Chipletifying Designs. At last year's HOT CHIPS, I don't remember any designs being presented like this. So that raises the obvious question, why now?
    Moore and More
    For over 50 years, the semiconductor industry has enjoyed the benefits of Moore's Law. But now the economics of semiconductor scaling have broken down. Moore's Law was mainly an economic law. If you read his original article (based on four datapoints!), he points out that the economically optimal number of transistors on a chip was doubling every couple of years. Of course, underlying it was the development of technology to make this be true, and until a few years ago that continued. The very high-level economic proposition was that each process generation doubled the number of transistors in the same area at a cost increase of just 15%, leaving a cost saving of roughly 40% per transistor. But now transistors get more expensive with each generation, since the processes are so complex and the capital investment required to build a working fab is so high (these days, that includes EUV steppers at over $100M each). So we have a process roadmap from 7nm, to 5nm, to 3nm, and a couple of generations after that. But the economics are such that these processes will be not just more expensive per wafer, as has been true for decades, but more expensive per transistor.
    View the full article HERE
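    The per-transistor economics quoted above can be checked in a couple of lines. Under the classic assumptions (2x the transistors in the same area at 1.15x the wafer cost), each transistor costs 1.15/2 = 0.575 of its predecessor, a saving of about 42.5% per generation. The helper name is invented for illustration:

```python
# Cost per transistor across one process generation, given how much the
# transistor count and the wafer cost each scale.
def per_transistor_cost_ratio(transistor_scale: float, cost_scale: float) -> float:
    """New cost per transistor as a fraction of the previous generation's."""
    return cost_scale / transistor_scale

ratio = per_transistor_cost_ratio(2.0, 1.15)  # classic Moore's Law scaling
saving = 1.0 - ratio
print(f"{ratio:.3f} {saving:.1%}")  # 0.575 42.5%
```

    The same helper shows the post-Moore problem the article describes: once the wafer-cost scale factor exceeds the transistor scale factor, the ratio goes above 1.0 and each transistor gets more expensive, not cheaper.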

  • RISC-V Foundation moves to Switzerland (Mannerisms - David Manners)
    The four-year-old RISC-V Foundation is moving from Delaware to Switzerland to allay foreign members’ fears of possible disruption to their continued development of the open-source technology. Although the non-profit foundation does not own the technology, Calista Redmond, the foundation’s chief executive, says foreign members of the foundation have said they’d be ‘more comfortable’ if the foundation were not incorporated in the US. The foundation has 325 members, including Alibaba, Huawei and Qualcomm.
    View the full article HERE

  • Shattering the neural network memory wall with Checkmate (Rambus Blog)
    A recent paper published on arXiv by a team of UC Berkeley researchers observes that neural networks are increasingly bottlenecked and constrained by the limited capacity of on-device GPU memory. Indeed, deep learning is constantly testing the limits of memory capacity on neural network accelerators as neural networks train with high-resolution images, 3D point-clouds and long Natural Language Processing (NLP) sequence data. “In these applications, GPU memory usage is dominated by the intermediate activation tensors needed for backpropagation. The limited availability of high bandwidth on-device memory creates a memory wall that stifles exploration of novel architectures,” the researchers explain. “One of the main challenges when training large neural networks is the limited capacity of high-bandwidth memory on accelerators such as GPUs and TPUs. Critically, the bottleneck for state-of-the-art model development is now memory rather than data and compute availability and we expect this trend to worsen in the near future.” As the researchers point out, some initiatives to address this bottleneck focus on dropping activations as a strategy to scale to larger neural networks under memory constraints. However, these heuristics assume uniform per-layer costs and are limited to simple architectures with linear graphs. As such, the UC Berkeley team uses off-the-shelf numerical solvers to formulate optimal rematerialization strategies for arbitrary deep neural networks in TensorFlow with non-uniform computation and memory costs. In addition, the UC Berkeley team demonstrates how optimal rematerialization enables larger batch sizes and substantially reduced memory usage – with minimal computational overhead across a range of image classification and semantic segmentation architectures.
    View the full article HERE
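    The core idea being optimized here, rematerialization (better known as gradient checkpointing), trades compute for memory: instead of storing every intermediate activation for backpropagation, keep only some and recompute the rest on the backward pass. The toy model below is purely illustrative (all names are invented, and each activation is assumed to cost one memory unit): keeping a checkpoint every k layers cuts peak activation storage from n units to roughly n/k + k, at the price of one extra forward pass of recomputation.

```python
import math

# Toy accounting for activation memory in an n-layer feed-forward network.
def peak_memory_store_all(n_layers: int) -> int:
    """Baseline: every layer's activation is kept for backprop."""
    return n_layers

def peak_memory_checkpointed(n_layers: int, k: int) -> int:
    """Keep one checkpoint every k layers; during the backward pass,
    recompute (at most) one k-layer segment at a time."""
    checkpoints = math.ceil(n_layers / k)  # activations kept permanently
    return checkpoints + k                 # plus one recomputed segment

n = 64
print(peak_memory_store_all(n))        # 64 units
print(peak_memory_checkpointed(n, 8))  # 8 checkpoints + 8-layer window = 16
```

    Choosing k near the square root of n minimizes n/k + k, which is the classic sqrt(n)-memory checkpointing result; Checkmate's contribution is replacing such uniform heuristics with an optimal solver-based schedule for networks with non-uniform costs and non-linear graphs.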

  • Softbank looks for $2.76bn loan (Mannerisms - David Manners)
    Softbank, fresh from its $9 billion bail-out of WeWork, which followed a $10 billion investment in the company, is reported to be seeking a $2.76 billion loan from three Japanese banks. The banks – Mizuho, Mitsubishi and Sumitomo Mitsui – are reportedly wary of lending to Softbank, because some have lent to Softbank before and the banks have also invested in Softbank’s Vision Fund, which took a hit in value from WeWork’s aborted IPO. Softbank is said to have a $100 billion stake in Alibaba – but doubts have been raised about how liquid that investment might be.
    View the full article HERE

  • Demystifying PCIe PIPE 5.1 SerDes Architecture (Express Yourself - Scott Knowlton)
    Artificial intelligence and machine learning are rapidly penetrating a wide spectrum of devices, driving the re-architecture of SoC designs and requiring more memory space and higher bandwidth to transfer and process data. This change requires higher-speed interfaces and wider buses, paving the path for enhancements in the latest PCIe protocol specifications, as well as upgrades to the PIPE (PHY Interface for PCI Express) specification as the preferred PHY interface. The PIPE specification has evolved to version 5.1.1, not only to match the latest protocol specifications but also to scale up for future enhancements to the protocols. The SerDes architecture makes a PIPE 5 PHY protocol-agnostic, with all the protocol-specific logic shifted to the controller. This simplifies the PHY design and allows it to be shared easily by different protocol stacks. The SerDes architecture for the PIPE interface achieves scalability by introducing several key changes to the responsibilities of the Physical Coding Sublayer (PCS) and Media Access Layer (MAC), along with updates to the signaling interface.
    View the full article HERE

  • Intel apologises (again) (Mannerisms - David Manners)
    It’s rare for Intel to apologise for anything and then, like London buses, two come along in quick succession. In October 2018 Intel’s CEO wrote to customers apologising for the shortage of CPUs, which had been getting worse all through that summer. "We’re taking the following actions," wrote Bob Swan. "We are investing a record $15 billion in capital expenditures in 2018, up approximately $1 billion from the beginning of the year. We’re putting that $1 billion into our 14nm manufacturing sites in Oregon, Arizona, Ireland and Israel. This capital along with other efficiencies is increasing our supply to respond to your increased demand. We’re making progress with 10nm. Yields are improving and we continue to expect volume production in 2019."
    View the full article HERE