Mercedes-Benz FutureInsight: “Human first”: empathy as anchor in the digital transformation
Stuttgart/Berlin. What does a desirable future that is worth living look like? How can individuality and digital transformation be reconciled? How can trust be established between humans and machines? A series of “FutureInsight” debates from Mercedes-Benz addresses questions like these. In these debates, Mercedes-Benz experts discuss such questions around the theme of mobility with academics,…
Suzuki Achieves Accumulated Automobile Production of 2 Million Units in Pakistan
26 November 2018. Pak Suzuki Motor Co., Ltd. (Pak Suzuki), a subsidiary of Suzuki Motor Corporation for the production and sale of automobiles and motorcycles in Pakistan, achieved accumulated automobile production of 2 million units in August 2018. To commemorate this milestone, a ceremony was…
The Future of Autonomous Vehicles: Part I – Think Like a Robot, Perceive Like a Human
By James R. Doty, MD and Blair LaCorte
For over three decades, I’ve studied and performed surgery on the human brain. I have always been fascinated by the power, plasticity and adaptability of the brain, and by how much of this amazing capacity is dedicated to processing and interpreting the data we receive from our senses. With the rapid ascension of Artificial Intelligence (AI), I began to wonder how developers would integrate the complex, multi-layered nature of human perception to enhance AI’s capabilities. I have been especially interested in how this integration would be applied to robots and autonomous vehicles. It became clear that the artificial intelligence needed to drive these vehicles will require artificial perception modeled after the greatest perception engine on the planet — the human visual cortex. These vehicles will need to think like a robot, but perceive like a human.
To learn more and to better understand how this level of artificial perception will be created, I recently became an advisor to AEye, a company developing cutting-edge artificial perception and self-driving technologies, to help them use knowledge of the human brain to better inform their systems. This is known as biomimicry: the concept of learning from and then replicating natural strategies from living systems and beings (plants, animals, humans, etc.) to better adapt design and engineering. Essentially, biomimicry allows us to fit into our existing environment and evolve in the way life has successfully done for the past few billion years. But why is incorporating biomimicry and aspects of human perception integral to the development and success of autonomous vehicles?
Because nothing can take in more information and process it faster and more accurately than the human perception system. Humans classify complex objects at speeds up to 27 Hz, with the brain processing 580 megapixels of data in as little as 13 milliseconds. If we continue using conventional sensor data collection methods, we are more than 25 years away from having AI match the capabilities of the human brain in robots and autonomous vehicles. Therefore, for self-driving cars to move safely and independently in crowded urban environments or at highway speeds, we must develop new approaches and technologies that meet or exceed the performance of the human brain. The next question is: how?
Orthogonal data matters
(Creating an advanced, multi-dimensional data type)
Orthogonal data refers to complementary data sets which together give you more quality information about an object or situation than each would alone, allowing us to make efficient judgments about what in our world is important and what is not. Orthogonality concepts for high information quality are well understood and rooted in disciplines such as quantum physics, where linear algebra is employed and orthogonal basis sets are the minimum pieces of information needed to represent more complex states without redundancy. When it comes to perceiving moving objects, two types of critical orthogonal data sets are often required — spatial and temporal. Spatial data specifies where an object exists in the world, while temporal data specifies where it exists in time. By integrating these data sets along with other complementary data sets such as color, temperature, sound, smell, etc., our brains generate a real-time model of the world around us, defining how we experience it.
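To make the idea concrete, here is a minimal sketch (ours, not AEye’s) of why the two data sets only become actionable together: a position sample alone says nothing about urgency, and a velocity sample alone says nothing about proximity, but fused they yield a time-to-contact. All names and units are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SpatialSample:
    x: float  # metres, lateral offset from the sensor
    y: float  # metres, distance ahead of the sensor

@dataclass
class TemporalSample:
    vx: float  # metres/second, lateral velocity
    vy: float  # metres/second; negative means closing on the sensor

def time_to_contact(pos: SpatialSample, vel: TemporalSample) -> float:
    """Seconds until the object reaches the sensor plane (y == 0).

    Neither sample answers this alone: position without velocity says
    nothing about urgency, and velocity without position says nothing
    about proximity. Fused, they yield actionable information.
    """
    if vel.vy >= 0:  # moving away or parallel: no contact expected
        return float("inf")
    return pos.y / -vel.vy

# A ball 15 m ahead and closing at 30 m/s leaves about half a second to duck.
print(time_to_contact(SpatialSample(0.0, 15.0), TemporalSample(0.0, -30.0)))  # 0.5
```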
The human brain takes in all kinds of orthogonal data naturally, decoupling and reassembling information instantaneously, without us even realizing it. For example, if you see a baseball flying through the air towards you, your brain is gathering all types of information about it, such as spatial (the direction the ball is headed) and temporal (how fast it’s moving). While this data is being processed by your visual cortex “in the background”, all you’re ultimately aware of is the action you need to take, which might be to duck. The AI perception technology that successfully adopts the manner in which the human brain captures and processes these types of data sets will dominate the market.
Existing robotic sensory data acquisition systems have focused on single sensor modalities (camera, LiDAR, radar), and only with fixed scan patterns and intensity. Unlike humans, these systems have neither learned nor acquired the ability to efficiently process and optimize 2D and 3D data in real-time while both the sensor and the detected objects are in motion. Therefore, they cannot use real-time orthogonal data to learn, prioritize, and focus. Effectively replicating the multi-dimensional sensory processing power of the human visual cortex will require a new approach to capturing and processing sensory data.
AEye is pioneering one such approach. AEye calls its unique biomimetic system iDAR (Intelligent Detection and Ranging). AEye’s iDAR is an intelligent artificial perception system that physically fuses a unique, agile LiDAR with a hi-res camera to create a new data type the company calls Dynamic Vixels. These Dynamic Vixels are one of the ways in which AEye acquires orthogonal data. By capturing x, y, z, r, g, b data (along with SWIR intensity), these patented Dynamic Vixels are uniquely created to biomimic the data structure of the human visual cortex. Like the human visual cortex, the intelligence of the Dynamic Vixels is then integrated into the central perception engine and motion planning system, which is the functional brain of the vehicle. They are dynamic because they actively interrogate a scene and adjust to changing conditions, such as increasing the power level of the sensor to cut through rain, or revisiting suspect objects in the same frame to identify obstacles. Better data drives more actionable information.
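As a rough illustration of what such a camera/LiDAR-fused record might carry, here is a sketch under our own assumptions: the field names, types, and toy classifier below are invented for exposition and are not AEye’s proprietary Dynamic Vixel format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FusedSample:
    # From the agile LiDAR (metres, sensor frame)
    x: float
    y: float
    z: float
    intensity: float  # SWIR return intensity, normalised to 0..1
    # From the co-registered hi-res camera (0..255 per channel)
    r: int
    g: int
    b: int
    t: float  # capture timestamp in seconds; revisits of the same region
              # at different times are what make the data "dynamic"

def looks_like_brake_light(s: FusedSample) -> bool:
    """Toy rule: a strongly red pixel sitting on a solid LiDAR return."""
    return s.r > 180 and s.g < 80 and s.b < 80 and s.intensity > 0.3

sample = FusedSample(x=1.2, y=24.0, z=0.9, intensity=0.6, r=210, g=40, b=35, t=0.013)
print(looks_like_brake_light(sample))  # True
```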
Not all objects are created equal
(See everything, and focus on what is important)
Humans continuously analyze their environment, always scanning for new objects, then, in parallel and as appropriate, focusing in on elements that are interesting, engaging, or potentially a threat. The visual cortex does this processing fast, with incredible accuracy, and with very little of the brain’s immense processing power. If a human brain functioned as autonomous vehicles do today, we would not have survived as a species.
In his book The Power of Fifty Bits, Bob Nease writes of the ten million bits of information the human brain processes each second, but how only fifty bits are devoted to conscious thought. This is due to multiple evolutionary factors, including our adaptation to ignore autonomic processes like our heart beating, or our visual cortex screening out less relevant information in our surroundings (like the sky) to survive. It is an intelligent system design.
This is the nature of our intelligent vision. So, while our eyes are always scanning and searching to identify new objects entering a scene, we focus our attention on objects that matter as they move into areas of concern, allowing us to track them over time. In short, we search a scene, consciously acquire the objects that matter, and track them as required.
As discussed, current autonomous vehicle sensor configurations utilize a combination of LiDAR, cameras, ultrasonics, and radar as their “senses.” These sensors collect data serially, one way, and are limited to fixed search patterns. They gather as much data as possible, which is then aligned, processed, and analyzed long after the fact. This post-processing is slow and does not allow the way sensory data is captured to change with the situation in real-time. Because these sensors don’t intelligently interrogate the scene, up to 95% of the sensory data currently being collected is thrown out as either irrelevant or redundant at the time it is processed. This act of triage itself comes with a latency penalty. At highway speeds, this latency results in a car moving more than 20 feet before the sensor data has been fully processed. Throwing away data you don’t need with the goal of being efficient is inefficient. A better approach exists.
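The 20-foot figure is simply latency multiplied by speed. A back-of-the-envelope check, where the 65 mph speed and ~210 ms end-to-end latency are our assumptions chosen to illustrate the arithmetic, not measured values:

```python
# Back-of-the-envelope check of the "20 feet at highway speed" claim.
MPH_TO_FPS = 5280 / 3600      # feet per second per mph
speed_fps = 65 * MPH_TO_FPS   # ~95.3 ft/s at an assumed 65 mph
latency_s = 0.210             # assumed ~210 ms end-to-end sensing latency

distance_ft = speed_fps * latency_s
print(f"{distance_ft:.1f} ft")  # ~20.0 ft travelled before the data is usable
```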
The overwhelming task of sifting through this data — every tree, curb, parked vehicle, the sky, the road, leaves on trees, and other static objects — also requires immense power and data processing resources, which slows down the entire system significantly and introduces risk. These systems aim to focus on everything, then try to analyze each item in the environment after the fact, at the expense of timely action. This is the exact opposite of how humans process the spatial and temporal data involved in driving.
AEye’s iDAR teaches autonomous vehicles to “search, acquire, and track” objects as we do, by defining new data and sensor types that more efficiently communicate actionable information while maintaining the intelligence to analyze this data as quickly and accurately as possible. AEye’s iDAR enables this through its unique foundational solid-state agile LiDAR. Unlike standard LiDAR, AEye’s agile LiDAR is situationally adaptive, so it can modify scan patterns and trade resources such as update rate, resolution, and maximum range detection, among others. This enables iDAR to dynamically adjust as it optimally searches a scene, conserve power and apply it to efficiently identify and acquire critical objects, and track these objects over time. iDAR’s unique ability to intelligently use power to search, acquire, and track scenes helps it identify whether an object is a child walking into the street or a car entering the intersection and accelerating to high speed. Doing this in real-time is the difference between a safe journey and an avoidable tragedy.
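One way to picture such a loop is as a per-frame scan budget split between focused revisits and background search. The sketch below is our own toy model: the states, the 0.2 threat threshold, and the 70/30 budget split are invented for illustration and are not AEye’s control logic.

```python
from dataclasses import dataclass

SEARCH, ACQUIRE, TRACK = "search", "acquire", "track"

@dataclass
class Target:
    track_id: int
    threat: float     # 0..1; e.g. a child stepping off the curb scores high
    state: str = SEARCH

def allocate_scan_budget(targets: list[Target], budget: float) -> dict[int, float]:
    """Split a fixed per-frame scan budget across targets by threat score.

    Threatening objects are promoted (search -> acquire -> track) and get
    revisits proportional to their score; the remainder funds background
    search for objects newly entering the scene.
    """
    hot = [t for t in targets if t.threat > 0.2]
    for t in hot:
        t.state = ACQUIRE if t.state == SEARCH else TRACK
    total = sum(t.threat for t in hot) or 1.0
    alloc = {t.track_id: 0.7 * budget * t.threat / total for t in hot}
    alloc[-1] = budget - sum(alloc.values())  # key -1: background search
    return alloc

scene = [Target(1, 0.9), Target(2, 0.1), Target(3, 0.4)]
print(allocate_scan_budget(scene, budget=1.0))
```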
Humans Learn Intuitively
(Feedback loops enable intelligence)
As we have discussed, the human visual cortex can scan at 27 Hz (much faster than current sensors on autonomous vehicles, which average around 10 Hz). The brain naturally gathers…
AEye’s iDAR Shatters Both Range and Scan Rate Performance Records for Automotive Grade LiDAR
Company Simultaneously Closes $40 Million Series B Funding to Fuel Global Expansion
“The test conducted by AEye delivered impressive results…This is an outstanding achievement that demonstrates the true potential of perception systems to reliably and accurately detect and track objects at great range.”
Pleasanton, CA – November 19, 2018 – AEye, a world leader in artificial perception systems and the developer of iDAR™, today announced a major breakthrough in long-range threat detection and safety. In performance specification tests monitored and validated by VSI Labs, one of the nation’s leading automated vehicle technology advisors, AEye’s iDAR system detected and tracked a truck at 1,000 meters, or one kilometer – four to five times the distance current LiDAR systems are able to detect. AEye’s test sets a new benchmark for solid-state LiDAR range, and comes one month after AEye announced a 100Hz scan rate – setting a new speed record for the industry.
The company simultaneously announced $40M in Series B funding, led by Taiwania Capital. The round was significantly oversubscribed and includes multiple global automotive OEMs, Tier 1s, and Tier 2s to be formally announced at CES in January. In addition to Taiwania Capital, existing investors Kleiner Perkins, Intel Capital, Airbus Ventures and Tychee Partners also participated.
New Range and Scan Rate Records Key to Autonomous Automotive and Trucking Safety
Using AEye’s standard iDAR sensor, the company set up a formal test, monitored by VSI Labs, the leading research and development resource for active safety and automated vehicle technologies. The test was structured to establish and verify the range and scan rates of the iDAR system.
The test was conducted on the runway of an airport in Byron, California in order to isolate targets and better measure and calibrate iDAR’s performance. To test range, a standard 20-foot moving truck was tracked and continuously scanned down the length of the 914-meter runway. At the end of the runway, the iDAR system was fully able to continuously detect and track the movements of the vehicle, as well as detect runway signs and markers en route. The AEye sensor vehicle was then taken off the runway to extend the available test range to over 1,000 m, where iDAR continued to track the truck without difficulty.
“The test conducted by AEye delivered impressive results,” said Sara Sargent, senior engineer at VSI Labs. “We monitored the performance and the truck was clearly identifiable and visible at 1 kilometer. We were also able to verify that AEye’s iDAR system achieves scan rates of 100Hz and that the fusion of the camera and LiDAR in the iDAR sensor produces accurate true color real-time point clouds in the form of Dynamic Vixels. This is an outstanding achievement that demonstrates the true potential of perception systems to reliably and accurately detect and track objects at great range.”
iDAR and Biomimicry
AEye’s iDAR is an intelligent artificial perception system that physically fuses an agile, solid-state LiDAR with a hi-res camera to create a new data type called Dynamic Vixels. These Dynamic Vixels are the result of real-time integration of iDAR’s agile LiDAR and a low-light camera within the iDAR sensor, not post-fusion of a separate camera and LiDAR system after the scan. By capturing x, y, z, r, g, b data, Dynamic Vixels are uniquely created to “biomimic” the data structure of the human visual cortex. Better data drives vastly superior performance and delivers more accurate information. AEye’s use of biomimicry is more fully explored by Dr. James Doty, world-renowned neurosurgeon and clinical professor in the Department of Neurosurgery at Stanford University, in an article he recently published on Medium.
“After establishing a new standard for LiDAR scan speed, we set out to see just how far we could accurately search, acquire and track an object such as a truck”, said Blair LaCorte, Chief of Staff at AEye. “The iDAR system performed as we expected. We detected the truck with plenty of signal to identify it as an object of interest, and then easily tracked it as it moved over 1000m away. We now believe that with small adaptations, we can achieve range performance of 5km to 10km or more. These results have significant implications for the autonomous trucking and Unmanned Aircraft Systems (UAS) markets, where sensing distance needs to be as far as possible and potential threats identified as early as possible to achieve safe, reliable vehicle autonomy.”
New Funds Fuel Company’s Global Expansion
In addition, AEye announced the close of its Series B round, bringing the company’s total funding to over $61 million. The funds will be used to scale AEye’s operations to meet global demand for the company’s artificial perception systems for autonomous vehicles. AEye is uniquely structured to effectively scale through partnerships with contract manufacturers and Tier 1s on a global basis. This has allowed the company to focus on its core design and innovation competencies, avoiding the costs of building manufacturing capacity, while optimizing investment dollars on higher value activities. AEye’s growth has been fueled by its ability, as a software driven platform, to provide artificial perception systems that address both ADAS and Mobility solutions and engagements with customers and partners in Europe, North America, and Asia.
“This funding marks an inflection point for AEye, as we scale our staff, partnerships and investments to align with our customers’ roadmap to commercialization,” said Luis Dussan, AEye founder and CEO. “Our strategic relationship with Taiwania will serve as a gateway to Asia, with valuable manufacturing, logistics and technology resources that will accelerate our ability to address the needs of a global market. We intend to launch our next generation product at CES, which we believe will help OEMs and Tier 1s accelerate their products and services by delivering market leading performance at the lowest cost.”
“We see AEye as the foremost innovator in this space, whose systems deliver highly precise, actionable information at speeds and distances never seen in commercially available LiDAR sensors,” said Huang Lee, Managing Partner at Taiwania. “We look forward to working closely with AEye’s team to explore and pursue growth opportunities in this burgeoning space.”
AEye takes a disruptive approach to vehicle perception by putting intelligence at the sensor layer and making it extensible and controllable via a software-driven architecture. The company’s iDAR system is an intelligent artificial perception system that physically fuses an agile, solid-state LiDAR with a hi-res camera to create a new data type called Dynamic Vixels, with integrated software-definable feedback control loops. This enables the iDAR sensor to dynamically assess and prioritize what’s most relevant in a scene, then process that data at the edge. This goes well beyond the function of legacy fixed-pattern LiDAR systems and standalone cameras with 2D computer vision algorithms. This unique approach enables rapid, dynamic perception and path planning, for drastically improved autonomous vehicle safety and performance.
About AEye
AEye is an artificial perception pioneer and creator of iDAR™, a perception system that acts as the eyes and visual cortex of autonomous vehicles. Since its demonstration of its solid-state LiDAR scanner in 2013, AEye has pioneered breakthroughs in intelligent sensing. The company is based in the San Francisco Bay Area, and backed by world-renowned investors including Kleiner Perkins Caufield & Byers, Taiwania Capital, Intel Capital, & Airbus Ventures.
About Taiwania Capital
Taiwania Capital is a venture capital firm sponsored by the Taiwan government and large private enterprises. Founded in 2017, Taiwania Capital is focused on ICT-related sectors and startups in fields including: enterprise IT infrastructure and software, AI, IoT, network security, industrial automation, drones and robotics, next-gen semiconductors, autonomous vehicle technology, and digital devices. With offices in both Taiwan and Silicon Valley, Taiwania Capital exclusively backs startups that will turn the promises of technological advancement into scalable applications.
Media Contact:
AEye, Inc.
Jennifer Deitsch
[email protected]
925-400-4366
TomTom navigation for motorcyclists now available on the BMW Motorrad Connected app
MILAN, 07-Nov-2018 — /EuropaWire/ — TomTom (TOM2) today announced that BMW Motorrad owners can now experience the best of TomTom navigation for motorcyclists running on the BMW Motorrad Connected app. The smartphone app stays safely in the rider’s pocket, while visual directions are shown on the bike’s integrated handlebar display. Audio directions are provided via Bluetooth® into the rider’s compatible helmet.
Features have been motorcycle-optimized, with one of the most requested – the option to choose winding routes – being introduced.
The new functionality is available from today, with app users needing only to update their app, free of charge, before their next ride.
Antoine Saucier, Managing Director, TomTom Automotive, said: “The combination of TomTom’s maps, software and services provides a fantastic motorbike navigation experience for BMW Motorrad riders.”
TomTom’s navigation components are provided to BMW Motorrad via TomTom’s navigation software, NavKit, alongside TomTom’s NDS maps and services including TomTom Traffic, weather, and speed cameras.
TomTom is at EICMA 2018 – Pavilion 13, Booth N72.
The Bluetooth® word mark and logos are registered trademarks owned by Bluetooth SIG, Inc. and any use of such marks by TomTom is under license. Other trademarks and trade names are those of their respective owners.
ENDS
About TomTom
TomTom is the leading independent location technology specialist, shaping mobility with highly accurate maps, navigation software, real-time traffic information and services.
To achieve our vision of a safer world, free of congestion and emissions, we create innovative technologies that keep the world moving. By combining our extensive experience with leading business and technology partners, we power connected vehicles, smart mobility and, ultimately, autonomous driving.
Headquartered in Amsterdam with offices in 37 countries, TomTom’s technologies are trusted by hundreds of millions of people worldwide.
www.tomtom.com
SOURCE: TomTom International BV
MEDIA CONTACT
tomtom.pr@tomtom.com
+ 31 (0) 20 7574730
Tower Signs a Memorandum of Understanding to Sell its European Operations at an Accretive Value
LIVONIA, Mich., Nov. 20, 2018 /PRNewswire/ — Tower International, Inc. (NYSE: TOWR), a leading global manufacturer of engineered automotive structural metal components and assemblies, today announced it has signed a Memorandum of Understanding relating to the sale of all of its European Operations to Financière SNOP Dunois S.A. (“FSD”), a privately owned French automotive supplier.
Tower's European Operations include manufacturing facilities in Belgium, Czech Republic, Germany, Italy, Poland and Slovakia, and offices in Germany and Italy. Financial results for full year 2018 are projected at revenue of $650 million and Adjusted EBITDA of $55 million. Before fees and other customary adjustments, the anticipated sale price represents an Enterprise Value of €255 million ($298 million at $1.17/Euro), an enterprise multiple of 5.4 times Adjusted EBITDA. This transaction multiple is significantly higher than the present multiple for Tower's common stock, which Tower estimates was approximately 4.5 times based on yesterday's closing price.
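The quoted multiple can be reproduced directly from the figures above; a quick arithmetic check (ours, not part of the release):

```python
# Reproducing the enterprise-multiple arithmetic from the figures above.
ev_eur = 255e6                  # anticipated enterprise value, euros
usd_per_eur = 1.17              # exchange rate quoted in the release
adjusted_ebitda_usd = 55e6      # projected FY2018 Adjusted EBITDA

ev_usd = ev_eur * usd_per_eur               # ~$298 million
multiple = ev_usd / adjusted_ebitda_usd     # ~5.4x Adjusted EBITDA
print(f"${ev_usd / 1e6:.0f}M -> {multiple:.1f}x")
```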
“This accretive transaction with FSD allows Tower to focus on a North American business with strong organic growth, profit margins and cash flow. It further strengthens Tower's balance sheet, enhances Tower's financial flexibility, and accelerates Tower's ability to invest in additional accretive growth, reduce leverage and/or return capital to Tower shareholders,” said CEO Jim Gouin. “FSD's and Tower Europe's operations are highly complementary in terms of both customers and geographic footprint. This combination will allow Tower's assets and colleagues to be better utilized as part of this pan-European entity.”
The memorandum of understanding signed by the parties, together with an unsigned stock purchase agreement, would form the basis on which the parties pursue the signing of a definitive agreement in the next few weeks, once works council consultation has taken place. Completion of the divestiture is expected to take place during the first quarter of 2019 and is subject to approval by the applicable antitrust authorities and other customary conditions. Tower expects to recognize a book loss of approximately $60 million related to the sale of the European operations. This one-time charge will include the reclassification of currency translation into earnings, other fair value adjustments and selling costs.
For this transaction, Rothschild & Co. served as Tower's M&A advisor, Freshfields Bruckhaus Deringer LLP was Tower's legal advisor, and De Brauw Blackstone Westbroek advised Tower on country specific legal matters. Tower also received advisory services from J.P. Morgan Securities LLC.
Tower to Host Conference Call Today at 2 p.m. EST
Tower will discuss this transaction and other related matters in a conference call at 2 p.m. EST today. Participants may listen to the audio portion of the conference call either through a live audio webcast on the Company's website or by telephone. The slide presentation and webcast can be accessed via the investor relations portion of Tower's website www.towerinternational.com. To dial into the conference call, domestic callers should dial (866) 393-4576, international callers should dial (706) 679-1462. An audio recording of the call will be available approximately two hours after the completion of the call. To access this recording, please dial (855) 859-2056 (domestic) or (404) 537-3406 (international) and reference Conference I.D. #1976027. A webcast replay will also be available and may be accessed via Tower's website.
Non-GAAP Financial Measures
This press release includes the following non-GAAP financial measures: “Adjusted EBITDA”, “Adjusted EBITDA Margin”, “Free Cash Flow”, and “Net Debt.” We define Adjusted EBITDA as net income / (loss) before interest, taxes, depreciation, amortization, restructuring items and other adjustments described in the reconciliations provided in this presentation. Adjusted EBITDA margin represents Adjusted EBITDA divided by revenues. Free Cash Flow is defined as cash provided by operating activities less cash disbursed for purchases of property, plant and equipment. Net Debt represents total debt less cash and cash equivalents. We use Adjusted EBITDA, Adjusted EBITDA margin, Free Cash Flow, and Net Debt as supplements to information provided in accordance with generally accepted accounting principles (“GAAP”) in evaluating our business and they are included in this presentation because they are principal factors upon which our management assesses performance. The non-GAAP measures presented above are not measures of performance under GAAP. These measures should not be considered as alternatives for the most directly comparable financial measures calculated in accordance with GAAP. Other companies in our industry may define these non-GAAP measures differently than we do and, as a result, these non-GAAP measures may not be comparable to similarly titled measures used by other companies in our industry; and certain of our non-GAAP financial measures exclude financial information that some may consider important in evaluating our performance. Given the inherent uncertainty regarding mark to market adjustments of financial instruments, fair value adjustments to our pension plan, potential gain or loss on our discontinued operations, potential restructuring expenses, and expenses related to our long-term incentive compensation programs in any future period, a quantitative reconciliation of forward-looking financial measures to the most directly comparable financial measures calculated and presented in accordance with GAAP is not feasible. Consequently, any attempt to disclose such reconciliations would imply a degree of precision that could be confusing or misleading to investors. The magnitude of these items, however, may be significant.
Forward-Looking Statements and Risk Factors
This press release contains statements which constitute forward-looking statements, within the meaning of the Private Securities Litigation Reform Act of 1995, including but not limited to statements regarding the completion of the pending transactions in this presentation, the consequences of that transaction, projected enterprise value, anticipated stock valuation, positioning, projected truck revenues and the outlook for revenue, Adjusted EBITDA, Adjusted EBITDA Margin, Free Cash Flow, net new business, net debt and leverage. The forward-looking statements can be identified by words such as “anticipate,” “believe,” “plan,” “estimate,” “expect,” “intend,” “project,” “target,” and other similar expressions. Forward-looking statements are made as of the date of this presentation and are based upon management's current expectations and beliefs concerning future developments and their potential effects on us. Such forward-looking statements are not guarantees of future performance. The following important factors, as well as risk factors described in our reports filed with the SEC, could cause our actual results to differ materially from estimates or expectations reflected in such forward-looking statements:
global automobile production volumes;
the financial condition of our customers and suppliers;
our ability to make scheduled payments of principal or interest on our indebtedness and comply with the covenants and restrictions contained in the instruments governing our indebtedness;
our ability to refinance our indebtedness;
risks associated with our non-U.S. operations, including foreign exchange risks and economic uncertainty in some regions;
any increase in the expense and funding requirements of our pension and other postretirement benefits;
our customers' ability to obtain equity and debt financing for their businesses;
our dependence on our largest customers;
pricing pressure from our customers;
changes to U.S. trade and tariff policies and the reaction of other countries thereto;
work stoppages or other labor issues affecting us or our customers or suppliers;
our ability to integrate acquired businesses;
our ability to take advantage of emerging secular trends;
risks associated with business divestitures;
costs or liabilities relating to environmental and safety regulations;
our ability to close the pending transaction in accordance with anticipated terms; and
regulatory and other conditions that must be satisfied or, in certain circumstances, waived in order to consummate the pending transaction.
We do not assume any obligation to update or revise the forward-looking statements contained in this press release.
Contact:
Derek Fiebig
Executive Director, Investor & External Relations
(248) 675-6457
fiebig.derek@towerinternational.com
View original content:http://www.prnewswire.com/news-releases/tower-signs-a-memorandum-of-understanding-to-sell-its-european-operations-at-an-accretive-value-300753352.html
SOURCE Tower International, Inc.
AEye Sets New Benchmark for LiDAR Range
Introducing 1KM
AEye has set a new benchmark for LiDAR range. In performance specification tests monitored and validated by VSI Labs, AEye’s iDAR system acquired and tracked a truck at 1,000 meters – or one kilometer – five times the distance current LiDAR systems are able to detect!
Milestone: ŠKODA AUTO’s Mladá Boleslav plant built its seven millionth MQ 200 gearbox since the plant began manufacturing the transmissions
ŠKODA AUTO is currently investing 65 million euros in stepping up gearbox production
ŠKODA AUTO’s manufacturing of transmissions plays a key role in Volkswagen Group’s production network
Gearbox production is consistently centred on the principles of Industry 4.0
MLADÁ BOLESLAV, 13-Nov-2018 — /EuropaWire/ — ŠKODA AUTO has reached yet another milestone in component production: today, ŠKODA AUTO’s Mladá Boleslav plant built its seven millionth MQ 200 gearbox since the company’s main plant began manufacturing the transmissions in 2000. The Czech car manufacturer makes gearboxes for its own cars as well as for models from other Volkswagen Group brands at its Mladá Boleslav and Vrchlabí plants. Nowadays, gearbox manufacturing follows the principles of Industry 4.0; ŠKODA AUTO is, for example, focusing on state-of-the-art technologies at both plants to make workspaces more ergonomic and to assist staff.
Michael Oeljeklaus, ŠKODA AUTO Board Member for Production and Logistics, stressed, “The gearboxes made at ŠKODA AUTO demonstrate their high level of quality and manufacturing precision every day, and reliably do their jobs in millions of vehicles. The fact that we have produced seven million MQ 200 transmissions since the start of production in 2000 is convincing proof of the amount of trust that our customers place in our components.”
The manual five- or six-speed MQ 200 gearboxes are designed for engines that deliver torque of up to 200 Nm. ŠKODA AUTO currently manufactures 1,500 units per day on two production lines in Mladá Boleslav. The gearbox comes in no fewer than 50 different configurations, which are installed in models from various Volkswagen Group brands.
The product portfolio at ŠKODA AUTO currently comprises three types of transmission: in addition to the MQ 200, MQ/SQ 100 gearboxes are also built at the main plant in Mladá Boleslav; ŠKODA AUTO has been producing DQ 200 automatic direct-shift transmissions at its Vrchlabí plant since 2012.
The total production figure for all transmission types made at the Mladá Boleslav and Vrchlabí plants per day is approximately 4,800. To date, ŠKODA AUTO has already produced well over 10 million gearboxes at both plants combined.
ŠKODA AUTO’s manufacturing of transmissions plays a key role in Volkswagen Group’s global production network. Over the course of 2018 and 2019, ŠKODA AUTO is investing more than 65 million euros in gearbox manufacturing in Mladá Boleslav to increase the production capacity of MQ 200 transmissions. Furthermore, in recent years ŠKODA AUTO has invested more than eight million euros in a new test stand area for gearboxes.
Production principles have radically changed since gearbox production began in 2000; nowadays, innovations from Industry 4.0 are used. For example, modern software has since replaced the paperwork originally used in shopfloor management at the main plant in Mladá Boleslav.
Twelve KUKA robots currently assist staff with assembly, for example by inserting screws or filling the gearboxes with oil.
With its digital shopfloor management and collaboration between employees and robots, ŠKODA AUTO is pressing ahead with the digitalisation of its production – a key pillar of its 2025 Strategy.
SOURCE: ŠKODA AUTO a.s.
MEDIA CONTACT
Jens Katemann
Head of Communications
e: jens.katemann@skoda-auto.cz
t: +420 326 811 880
Montupet UK Limited
Montupet UK Limited, a subsidiary of Montupet SA, has received an Invest NI Grant for Research and Development, supporting company innovation in services, products and processes.
The grant is part-financed by the Investment for Growth and Jobs Programme for Northern Ireland, co-financed by the European Regional Development Fund.
The funding for the project, covering the period 2014–2015 and relating to Panther Cylinder Head Development, assisted the industrialisation and introduction of a new product using pioneering manufacturing techniques.
MONTUPET
202, quai de Clichy
BP77 – 92112 Clichy cedex France
Telephone: +33 (0)1 47 56 47 56
Fax: +33 (0)1 47 39 77 93
Communauto receives City of Toronto’s first permit for free-floating car-share pilot
Communauto FLEX Toronto will begin operations in November 2018
Canada’s longest-running car-sharing company Communauto will begin operating Toronto’s first free-floating car-share project under the City of Toronto’s new pilot program. The service, Communauto FLEX, plans to have more than 500 cars on the ground covering 100 square kilometres in Toronto.
During the first phase of the pilot, in November 2018, Communauto FLEX will operate in the downtown core, with 200 cars servicing 50 square kilometres. Communauto president Benoit Robert received the City’s first permit from the Mayor of Toronto and Barbara Gray, the City of Toronto’s General Manager of Transportation. The event took place during a press conference on October 9.
“Car-sharing technologies have the power to change how people go about their day-to-day lives and get around this city. I’ve encouraged the introduction of these new technologies and believe that there can be many benefits, including potentially reducing traffic and congestion by removing cars from the road,” said Mayor John Tory. “I’m proud we’ve worked to strike a balance between the benefits of car-sharing and the potential impacts it could have on neighbourhoods. I’m thrilled that the team at Communauto is among the first to join the pilot and operate here within these new regulations.”
Approved in April 2018
The Free-Floating Car-Share pilot project was approved by the City of Toronto in April 2018. Free-floating car-sharing allows members to take one-way trips beginning in one location and ending in another. This service enables drivers to pick up cars from and drop off cars to legal residential parking spaces throughout the city, rather than being confined to a fixed location.
Communauto FLEX has no monthly fees and is free to join, using a pay-as-you-go structure. Trips are priced at $0.41/minute, $15/hour or $50/day, with $35 charged for each following day. The first 150 km are included in the price of each trip.
During launch, users will receive the first 30 minutes of each trip free for one month after joining.
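Read as a nested tariff, the pricing above works out roughly as follows. This is a minimal sketch under our own assumptions (a partial hour never costs more than the hourly rate, and a day never more than the daily rate); the capping order is not spelled out in the announcement and is not Communauto’s published billing rule.

```python
def trip_cost(total_minutes: int) -> float:
    """Estimated fare for one trip, ignoring taxes and any per-km charge
    beyond the included 150 km. Assumption: a partial hour never costs
    more than the hourly rate, and a day never more than the daily rate.
    """
    days, rem = divmod(total_minutes, 24 * 60)
    hours, minutes = divmod(rem, 60)
    day_cap = 50.0 if days == 0 else 35.0    # cap for the trailing partial day
    minute_charge = min(minutes * 0.41, 15.0)
    remainder = min(hours * 15.0 + minute_charge, day_cap)
    return remainder if days == 0 else 50.0 + (days - 1) * 35.0 + remainder

print(trip_cost(45))       # $15.00: the hourly cap beats 45 x $0.41 = $18.45
print(trip_cost(26 * 60))  # $80.00: $50 first day + 2 extra hours at $15/h
```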
Torontonians can pick up Communauto FLEX cars from and drop them off at legal on-street parking spaces within the service area, and drive anywhere they want to go in between.
CEO Benoit Robert wishes to add to the existing transit options
“As a Canadian company, we’re proud to be the first to bring a free-floating car service to Toronto under this pilot. Our FLEX service will make car-sharing easy and affordable for city residents, and our goal is to work together with Toronto’s existing transit options to provide better solutions to mobility issues,” said Benoit Robert, Communauto president. “Car-sharing offers a number of social and environmental benefits to individuals and businesses. We are looking forward to working with the city, our users and other community partners to create thriving mobility solution.”
Free-float zone
Communauto FLEX will launch in Toronto with 200 Hyundai Accent hatchbacks. The free-floating car-share will service an area covering close to 50 square kilometres in the downtown core. Its parking zone will span to High Park and Runnymede in the west, Dupont and Danforth in the north, and Victoria Park in the east. Excluded from the service area at launch: metered parking spaces and Green P lots.
If the pilot project is approved after the first 18 months of operations, Communauto FLEX will expand to have more than 500 cars on the road and will grow its service area to cover 100 square kilometres, serving residents up to Eglinton Ave.
How it works
Communauto FLEX offers Torontonians 24/7 access to one-way car-sharing, a simple way to improve their mobility inside and outside the city.
Registered users can locate and reserve cars using the Communauto FLEX mobile App. Doors can also be unlocked and locked using a smartphone or a membership card.
The first 30 minutes of a booking are free; this window gives users enough time to reach the vehicle. Drivers can also stop and start their journey at any time.
To pre-register for Communauto FLEX, please visit: https://toronto.communauto.com/
More information on Toronto’s Free-Floating Car-Share pilot project is available at http://bit.ly/FFCarSharePilot2018
About Communauto
Founded in Quebec City in 1994, Communauto is the oldest car-sharing service in North America and the one serving the largest number of communities in Canada. With a fleet of more than 2,000 vehicles, the company serves 13 cities, each with a local approach: Edmonton, Waterloo Region, Hamilton, London, Guelph, Kingston, Ottawa, Gatineau, Montréal/Laval/Longueuil, Quebec City/Lévis, Sherbrooke, Halifax, and Paris in France. The Communauto group operates gas, hybrid and electric cars, and offers both round-trip and free-floating car-sharing.