
Navigating The Soon-To-Release Honda E

As part of Honda’s plan to offer electric or hybrid versions of its core European models by 2022 and to electrify two-thirds of its lineup by 2030, the Honda E targets urban buyers at a price point comparable to the Renault Zoe and the 40 kWh Nissan Leaf. This stylish, tech-heavy EV takes on its rivals with a suite of infotainment options wrapped in an original-Civic-inspired body, but it runs a smaller 35.5 kWh battery, which Honda argues is ample for urban driving while keeping the car light, efficient, and sporty.

Honda refined its Urban EV concept with more rounded headlights and taillights and a higher seating position, while keeping the pop-out door handles and camera side mirrors that smooth out the E’s aerodynamics.

The Honda E’s charging port sits under a covered flap atop the hood, which you can pop open with the remote or your phone. The trunk is spacious enough for a few grocery bags or a short weekend getaway. The bold interior is trimmed with furniture-style fabrics and faux-wood accents, and includes an HDMI input so you can plug in a Chromecast dongle.

“Our interior designer wanted to create a space that’s like a living room, with a sofa and TV,” Takahiro Shinya, head of dynamic performance for the Honda E, noted. “That’s to ensure that this car is not only comfortable for when you’re driving, but also when you’re charging. We wanted it to let people use it almost as a private room.”

The Honda E will be offered in the UK in two models: the E and the E Advance. Both produce 232 foot-pounds of torque, weigh 3,086 pounds, and are rear-wheel drive; the base E packs a 134-horsepower electric motor, while the Advance gets 152 horsepower. Compared with Renault’s rival Zoe, the E weighs less yet out-torques the Zoe’s 180 foot-pounds.

The E’s dashboard houses two 6-inch side screens for the camera mirrors, an 8.8-inch driver information display, and two 12.3-inch touchscreens in the center, one each for the driver and the front passenger.

Putting the wheels in motion

On a 60-mile trip across varied Valencia terrain, both wet and dry, the Honda E performed impressively, even if the first glance at its infotainment systems felt overwhelming. The E’s ergonomics let you fall back on physical buttons on the steering wheel and dash if you prefer to operate things old-school instead of relying on the touch displays, and a classic volume knob sits in the middle of the console.

Entertainment-wise, the E’s navigation system supports both Android Auto and Apple CarPlay while still giving you access to apps like Honda’s Aha radio. And, quite reminiscent of Sony’s Vision-S concept at CES 2020, the Honda E can swap screens between the driver and passenger displays for added convenience.

Beyond the features, the drive itself delivers acceleration that suits both city scooting and highway cruising. The E has independent suspension with MacPherson struts at each wheel and a perfect 50:50 weight distribution, letting it corner with minimal body lean. Its 14.1-foot turning radius also gives it a very tight turning circle that out-turns the Fiat 500 and most small cars.

The E’s brake-energy recovery system also gives you plenty of control. A button on the center console enables single-pedal driving, which brings the car to a complete stop when you lift off the accelerator, while the paddles on either side of the wheel let you dial the level of energy recovery from minimal to aggressive braking.

The side cameras give a clearer view and reduce blind spots. The catch is that if the electronics malfunction, you’re left with a blank, unusable screen, which is far worse than a broken mirror. The rearview mirror, on the other hand, is backed by a regular physical mirror in addition to its rear-mounted camera.

The Honda E’s driver assistance relies on Honda’s Sensing suite, which uses radar and high-resolution wide-angle cameras. If the car drifts toward the road’s edge, the road departure mitigation system nudges the steering to guide the vehicle back into its lane. As in the Civic and other recent Honda models, automatic braking helps avoid collisions with pedestrians and other cars, and the E also offers adaptive cruise control, road sign detection, lane keep assist, automatic high beams, and more. Befitting such a high-tech car, the E includes a Parking Pilot that lets you select a vacant space and automatically parks itself in parallel, diagonal, lined, or parking-garage spots. If something goes amiss, a tap of the brake stops the maneuver, which can then resume.

The tradeoff is range: the Honda E’s WLTP rating is just 137 miles on a single charge, and the battery was down to about 20 percent after the 60-mile cross-country trip. It does, however, support chargers of up to 100 kW, which take the E from zero to 80 percent in about 30 minutes, while more common 50 kW fast chargers take merely a couple of minutes longer.
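
For a sense of what those figures imply, here is a back-of-the-envelope sketch using only the numbers quoted above (battery size, the zero-to-80-percent window, and the two charger powers). The “implied average rate” at the end is a rough inference, since real-world charging tapers off as the battery fills rather than holding peak power.

```python
# Rough charging arithmetic from the figures quoted in the article.
battery_kwh = 35.5
energy_needed_kwh = battery_kwh * 0.80          # charging from 0 to 80 percent

for charger_kw in (100, 50):
    ideal_minutes = energy_needed_kwh / charger_kw * 60
    print(f"{charger_kw} kW charger: ~{ideal_minutes:.0f} min if held at full power")

# The quoted ~30-minute figure implies an average charge rate of roughly:
avg_kw = energy_needed_kwh / (30 / 60)
print(f"Implied average charge rate: ~{avg_kw:.0f} kW")
```

The implied average of roughly 57 kW is well below the 100 kW peak, which is why dropping to a 50 kW charger only adds a few minutes.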

The Verdict

Although the Honda E falls short on range and battery size compared with its Renault Zoe rival and its 50 kWh battery (and 242-mile WLTP range), Shinya believes urban buyers won’t find range a deciding factor for their commutes.

The Honda E will sell at a base price of £26,160 ($34,200), with the E Advance starting at £28,660 ($37,500), both figures including the £3,500 government rebate. That makes it more expensive than the Renault Zoe at £25,670, or around $33,600 (also including the £3,500 rebate), but cheaper than the 40 kWh Leaf at £26,345 ($34,400).

Defending the E’s design, Shinya said: “We needed to provide buyers with a vehicle that, at a glance, is something different. We don’t want you to feel like you just have a different motor, but that you have bought something which is completely new, completely ‘next-generation’.”

What Is REAL Messenger?

REAL Messenger is an app designed to help agents promote their listings and themselves.

Social media is table stakes now, but not all apps are created equal. Facebook is for friends, Instagram is for interests, and LinkedIn is for work connections. So why shouldn’t real estate have its own social media platform that brings together a global community of agents, buyers, and sellers?

We’ve learned that there’s power in a platform to find what you’re looking for, to share, and to chat, opening doors and elevating your presence. We’ve watched how people become influencers with followers who devour every post. These influencers don’t have to pay to promote themselves; they’ve built their audience through content, personality, and what they stand for.

In contrast, real estate has become “pay-to-play,” restricting agents from showcasing their listings on well-known real estate platforms, unless they pay (a lot!) to promote them. Agents have lost control in the real estate process, while big proptech profits from agents’ hard-earned listings.

Social media meets real estate

Imagine a social media outlet just for real estate — one like Instagram or WhatsApp geared 100% to our industry. The audience is engaged in real estate. Agents connect to share information about listings with one another or potential buyers and sellers – and retain those connections. Agents can even share their knowledge of properties before they are listed publicly.

It’s now all possible with the REAL Messenger app, an incredibly fast social media platform for real estate agents to promote their listings and share their styles and specialties, as well as their sales history and approach. Integrated into the app is an easy chat feature that replaces the need for cold calls, excessive emails, and online ads that don’t yield much return on investment.

Giving agents back control

From the agents’ perspective, sites and apps like Zillow take listing information from the MLS, repackage it to promote on their own sites, then sell it back to the agents who owned it in the first place, leaving agents to pay expensive advertising fees. And while many agents use Instagram to tell followers about their listings, they aren’t really reaching a real estate-specific audience; WhatsApp, likewise, offers secure, encrypted chats for quick exchanges of information, but how can an agent build a business promoting to people who aren’t in the market? Our formidable team of developers created the best of all worlds for the world of real estate: it’s like Instagram with a secure chat feature similar to that of WhatsApp.

The REAL advantage

Self-branding and inbound marketing are built into REAL. Agents brand themselves by creating content that showcases their listings, providing the information buyers need, and sharing their transaction successes. Agents also use the app’s three-point rating system to describe their listings and other important characteristics.

Potential buyers can search for anything specific to their interests (e.g., a home with a patio, a garden, or a swimming pool) in particular zip codes. They can also browse by scrolling through the listings to find the hottest, most popular real estate properties in their areas. These potential buyers can follow agents whose posts resonate with their preferences and interests, expanding agents’ networks.

Deepfakes Explained

Deepfakes are fake videos created with digital software, machine learning, and face swapping: computer-generated footage in which images are combined to depict events, statements, or actions that never actually happened. The results can be quite convincing, and deepfakes differ from other forms of false information in being very difficult to identify as false.

How do deepfakes work?

The basic concept behind the technology is facial recognition. Users of Snapchat will be familiar with the face-swap and filter functions that transform or augment their facial features; deepfakes are similar but much more realistic. Fake videos can be created using a machine learning technique called a generative adversarial network, or GAN. For example, a GAN can look at thousands of photos of Beyoncé and produce a new image that approximates those photos without being an exact copy of any one of them. GANs can also generate new audio from existing audio, or new text from existing text; it is a multi-use technology. The software used to create deepfakes maps faces according to “landmark” points, features like the corners of your eyes and mouth, your nostrils, and the contour of your jawline.
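
To make the adversarial idea concrete, here is a minimal, hypothetical sketch of a GAN training loop in PyTorch. It learns a toy one-dimensional distribution rather than faces; the network sizes, learning rates, and the stand-in “real” data are illustrative assumptions, not the pipeline any actual deepfake tool ships.

```python
# Minimal GAN sketch: a generator learns to mimic a toy "real" distribution
# while a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

# Generator: turns random noise into a fake sample
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: guesses whether a sample is real (1) or generated (0)
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # "Real" data: a Gaussian standing in for real photos
    real = torch.randn(64, 1) * 0.5 + 3.0
    fake = G(torch.randn(64, 8))

    # Train the discriminator to separate real from fake
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

The two networks improve against each other over many rounds, which is the same competitive dynamic that lets deepfake generators produce increasingly realistic faces.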

When seeing is no longer believing

While the technology used to create deepfakes is relatively new, it is advancing quickly, and it is becoming harder and harder to tell whether a video is real. Developments of this kind have obvious social, moral, and political implications. There are already issues around news sources and the credibility of stories online, and deepfakes have the potential to exacerbate the problem of false information, or to disrupt and undermine trust in news and information in general.

The real danger of false information and deepfake technology is that it breeds mistrust or apathy about what we see and hear online. If everything could be fake, does that mean nothing is real anymore? For as long as we have had photographs, video, and audio footage, they have helped us learn about our past and shaped how we see and know things. Some people already question the facts around events that unquestionably happened, like the Holocaust, the moon landing, and 9/11, despite video proof. If deepfakes make people believe they can’t trust video, the problems of false information and conspiracy theories could get worse.

False news can lead to false memories

One of the most common concerns about deepfakes and false information in general is the impact they can have on democratic processes and elections.

A recent UCC study found that people recall fake news more readily than real news. Its results indicated that voters may form false memories after seeing fabricated news stories, especially if those stories align with their political beliefs, and the researchers suggest the findings show how voters may be influenced in upcoming political contests, like the 2020 US presidential race.

The author of the report, Dr. Gillian Murphy, added: “This demonstrates the ease with which we can plant these entirely fabricated memories, despite this voter suspicion and even despite an explicit warning that they may have been shown fake news.”

What Is Cognitive Computing?

Cognitive computing is the use of computerized models to simulate the human thought process in complex situations where the answers may be ambiguous and uncertain. The phrase is closely associated with IBM’s cognitive computer system, Watson.

Computers are faster than humans at processing and calculating, but they have yet to master some tasks, such as understanding natural language and recognizing objects in an image. Cognitive computing is an attempt to have computers mimic the way a human brain works.

To accomplish this, cognitive computing makes use of artificial intelligence (AI) and other underlying technologies, including the following:

  • Expert systems
  • Neural networks
  • Machine learning
  • Deep learning
  • Natural language processing (NLP)
  • Speech recognition
  • Object recognition
  • Robotics

Cognitive computing uses these processes in conjunction with self-learning algorithms, data analysis, and pattern recognition to teach computing systems. The learning technology can be used for speech recognition, sentiment analysis, risk assessments, face detection, and more. In addition, it is particularly useful in fields such as healthcare, banking, finance, and retail.

How Does Cognitive Computing Work?

Cognitive computing systems combine data from various sources while weighing context and conflicting evidence to suggest the best possible answers. To achieve this, they include self-learning technologies that use data mining, pattern recognition, and NLP to mimic human intelligence.

Using computer systems to solve the types of problems that humans are typically tasked with requires vast amounts of structured and unstructured data fed to machine learning algorithms. Over time, cognitive systems are able to refine the way they identify patterns and the way they process data. They become capable of anticipating new problems and modeling possible solutions.

For example, by storing thousands of pictures of dogs in a database, an AI system can be taught how to identify pictures of dogs. The more data a system is exposed to, the more it is able to learn and the more accurate it becomes over time.
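
As a concrete illustration of that learning-from-examples idea, here is a minimal, hypothetical sketch in PyTorch that fine-tunes a pretrained network to tell dogs from non-dogs. The folder layout (data/dogs and data/not_dogs), model choice, and training settings are assumptions made for illustration, not a description of any particular cognitive computing product.

```python
# Teach a model to recognize dogs from labeled example images.
# Assumes an image folder laid out as data/dogs/... and data/not_dogs/...
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data", transform=transform)  # hypothetical folder
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained network and retrain only its final layer
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # dog vs. not-dog

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):  # more data and more passes generally improve accuracy
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The key point the article makes holds here too: the more labeled examples the system sees, the better its pattern recognition becomes over time.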

To achieve those capabilities, cognitive computing systems must have the following attributes:

  • Adaptive. These systems must be flexible enough to learn as information changes and as goals evolve. They must digest dynamic data in real time and adjust as the data and environment change.
  • Interactive. Human-computer interaction is a critical component of cognitive systems. Users must be able to interact with cognitive machines and define their needs as those needs change. The technologies must also be able to interact with other processors, devices, and cloud platforms.
  • Iterative and stateful. Cognitive computing technologies can ask questions and pull in additional data to identify or clarify a problem. They must be stateful in that they keep information about similar situations that have previously occurred.
  • Contextual. Understanding context is critical in thought processes. Cognitive systems must understand, identify and mine contextual data, such as syntax, time, location, domain, requirements, and a user’s profile, tasks, and goals. The systems may draw on multiple sources of information, including structured and unstructured data and visual, auditory, and sensor data.

Examples and applications of cognitive computing

Cognitive computing systems are typically used to accomplish tasks that require the parsing of large amounts of data. For example, in computer science, cognitive computing aids in big data analytics, identifying trends and patterns, understanding human language, and interacting with customers.
