
Ford commits to fully autonomous vehicles within five years



I have seen it called the arrogance of ignorance.

Mark Twain said it this way: “It's not what you don't know that kills you, it's what you know for sure that ain't true.”

 

https://newrepublic.com/minutes/126677/it-aint-dont-know-gets-trouble-must-big-short-opens-fake-mark-twain-quote

 

It’s a perfect quote for the film, only it does not appear in any of Twain’s books, essays, letters, or speeches. Because Mark Twain never wrote it or said it or anything like it.

 


 

What's missing from that link--and all of the others--is a citation showing when and where Twain said that.

 

And that's because he never said it. The cadence is wrong, and the use of the language is wrong. It smells wrong; it doesn't look like anything remotely characteristic of Twain. It's like this popular item:

 

[Image: madhatter2--a.jpg, a widely shared Mad Hatter quote graphic]

 

It looks like it was cribbed right out of the book, doesn't it? It's got the olde tyme font and everything.

 

But it's flat out not in the book. And not only is it not in the book, the usage and worldview are so anachronistic that it isn't even plausibly in the book. It's from the 2010 movie.

 

--

 

And the takeaway from all of that is pretty much the point you were trying to make. I know what the issues are with autonomous driving. I know enough linguistics to be able to explain in fundamental terms the language processing necessary to drive. I know the limitations of machine learning as it pertains to language comprehension. I know enough philosophy to be able to explain in simple terms the fundamental difference between human consciousness and input processing.

 

I know that the difference between driver assistive technology and self-driving cars is comparable to the difference between an x-ray machine and a radiologist.

 

You, on the other hand, seem to be content with what you conclude based on what you've read on the internet.


I concede MT 'may' not have been the author, which matters little from my POV, but as with most of your arguments, relevance and context are questionable. I have little interest in who was first to say it but much more interest in what was said. In this case, what was said fits lots of arguments made on BON.

I simply try to keep an open mind about issues.

Have fun in your cocoon...


I simply try to keep an open mind about issues.

 

There's a difference between keeping an open mind and choosing to remain uneducated.

 

The challenges of creating an AI capable of driving do not disappear because you choose to ignore them, any more than your electric bill will disappear if you choose not to pay it.

 

I would also point out that insisting that self-driving cars are coming is not 'keeping an open mind'. It's choosing a position and advocating it regardless of whether or not you understand it.


This guy is an idiot, right?

 

 

 

Ford Targets Fully Autonomous Vehicle for Ride Sharing in 2021; Invests in New Tech Companies, Doubles Silicon Valley Team
  • Ford announces intention to deliver high-volume, fully autonomous vehicle for ride sharing in 2021
  • Ford investing in or collaborating with four startups on autonomous vehicle development
  • Company also doubling Silicon Valley team and more than doubling Palo Alto campus
 

PALO ALTO, Calif., Aug. 16, 2016 – Ford today announces its intent to have a high-volume, fully autonomous SAE level 4-capable vehicle in commercial operation in 2021 in a ride-hailing or ride-sharing service.

To get there, the company is investing in or collaborating with four startups to enhance its autonomous vehicle development, doubling its Silicon Valley team and more than doubling its Palo Alto campus.

“The next decade will be defined by automation of the automobile, and we see autonomous vehicles as having as significant an impact on society as Ford’s moving assembly line did 100 years ago,” said Mark Fields, Ford president and CEO. “We’re dedicated to putting on the road an autonomous vehicle that can improve safety and solve social and environmental challenges for millions of people – not just those who can afford luxury vehicles.”

Autonomous vehicles in 2021 are part of Ford Smart Mobility, the company’s plan to be a leader in autonomous vehicles, as well as in connectivity, mobility, the customer experience, and data and analytics.

Driving autonomous vehicle leadership
Building on more than a decade of autonomous vehicle research and development, Ford’s first fully autonomous vehicle will be a Society of Automotive Engineers-defined level 4-capable vehicle. Plans are to design it to operate without a steering wheel, gas pedal or brake pedal, to be used in commercial mobility services such as ride sharing and ride hailing within geo-fenced areas, and to be available in high volumes.

“Ford has been developing and testing autonomous vehicles for more than 10 years,” said Raj Nair, Ford executive vice president, Global Product Development, and chief technical officer. “We have a strategic advantage because of our ability to combine the software and sensing technology with the sophisticated engineering necessary to manufacture high-quality vehicles. That is what it takes to make autonomous vehicles a reality for millions of people around the world.”

This year, Ford will triple its autonomous vehicle test fleet to be the largest test fleet of any automaker – bringing the number to about 30 self-driving Fusion Hybrid sedans on the roads in California, Arizona and Michigan, with plans to triple it again next year.

Ford was the first automaker to begin testing its vehicles at Mcity, University of Michigan’s simulated urban environment, the first automaker to publicly demonstrate autonomous vehicle operation in the snow and the first automaker to test its autonomous research vehicles at night, in complete darkness, as part of LiDAR sensor development.

To deliver an autonomous vehicle in 2021, Ford is announcing four key investments and collaborations that are expanding its strong research in advanced algorithms, 3D mapping, LiDAR, and radar and camera sensors:

  • Velodyne: Ford has invested in Velodyne, the Silicon Valley-based leader in light detection and ranging (LiDAR) sensors. The aim is to quickly mass-produce a more affordable automotive LiDAR sensor. Ford has a longstanding relationship with Velodyne, and was among the first to use LiDAR for both high-resolution mapping and autonomous driving beginning more than 10 years ago
  • SAIPS: Ford has acquired the Israel-based computer vision and machine learning company to further strengthen its expertise in artificial intelligence and enhance computer vision. SAIPS has developed algorithmic solutions in image and video processing, deep learning, signal processing and classification. This expertise will help Ford autonomous vehicles learn and adapt to the surroundings of their environment
  • Nirenberg Neuroscience LLC: Ford has an exclusive licensing agreement with Nirenberg Neuroscience, a machine vision company founded by neuroscientist Dr. Sheila Nirenberg, who cracked the neural code the eye uses to transmit visual information to the brain. This has led to a powerful machine vision platform for performing navigation, object recognition, facial recognition and other functions, with many potential applications. For example, it is already being applied by Dr. Nirenberg to develop a device for restoring sight to patients with degenerative diseases of the retina. Ford’s partnership with Nirenberg Neuroscience will help bring humanlike intelligence to the machine learning modules of its autonomous vehicle virtual driver system
  • Civil Maps: Ford has invested in Berkeley, California-based Civil Maps to further develop high-resolution 3D mapping capabilities. Civil Maps has pioneered an innovative 3D mapping technique that is scalable and more efficient than existing processes. This provides Ford another way to develop high-resolution 3D maps of autonomous vehicle environments

Silicon Valley expansion
Ford also is expanding its Silicon Valley operations, creating a dedicated campus in Palo Alto.

Adding two new buildings and 150,000 square feet of work and lab space adjacent to the current Research and Innovation Center, the expanded campus grows the company’s local footprint and supports plans to double the size of the Palo Alto team by the end of 2017.

“Our presence in Silicon Valley has been integral to accelerating our learning and deliverables driving Ford Smart Mobility,” said Ken Washington, Ford vice president, Research and Advanced Engineering. “Our goal was to become a member of the community. Today, we are actively working with more than 40 startups, and have developed a strong collaboration with many incubators, allowing us to accelerate development of technologies and services.”

Since the new Ford Research and Innovation Center Palo Alto opened in January 2015, the facility has rapidly grown to be one of the largest automotive manufacturer research centers in the region. Today, it is home to more than 130 researchers, engineers and scientists, who are increasing Ford’s collaboration with the Silicon Valley ecosystem.

Research and Innovation Center Palo Alto’s multi-disciplinary research and innovation facility is the newest of nearly a dozen of Ford’s global research, innovation, IT and engineering centers. The expanded Palo Alto campus opens in mid-2017.


I think that's a press release.

 

Post links to sources, Biker. You have been told to do this on MULTIPLE occasions.

 

And this is one of the stupidest things Ford has thrown money at in a long time.

 

But, sure, since understanding the limitations of AI requires a deep understanding of the humanities, the limitations don't exist at all.

 

The belief that throwing money at a problem will solve it, even when that problem is deeply and intractably tied to an inaccurate conception of the way the mind works and interacts with its environment, is hardly new, and its record of failure is longstanding and total.

 

Oh, I can hear a few people talking about "but a computer beat a grandmaster in chess!" -- yes: such an outcome was hardly unforeseeable. One of the guys who invented Unix (Ken Thompson) was writing software that could beat masters at chess in the early '80s. Chess is a poor surrogate for human ability, confined as it is to a limited and calculable problem set with clearly defined boundaries. Chess is susceptible to brute-force solutions. Driving is not.
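To make the brute-force point concrete, here is a minimal sketch of why a bounded game yields to exhaustive search: the rules hand you a finite list of legal moves and a computable score, so a program can simply try everything to a fixed depth. The `game` interface below is hypothetical, invented purely for illustration; it is not any real chess engine.

```python
# Toy brute-force game-tree search (a negamax form of minimax).
# `game` is a hypothetical interface assumed to provide:
#   legal_moves(state) -> finite iterable of legal moves
#   apply(state, move) -> the resulting state
#   is_over(state)     -> True when the game has ended
#   score(state)       -> numeric score for the player to move
def best_score(game, state, depth):
    """Exhaustively search the bounded game tree to a fixed depth."""
    if depth == 0 or game.is_over(state):
        return game.score(state)
    best = float("-inf")
    for move in game.legal_moves(state):
        # The opponent's best result is our worst, hence the negation.
        best = max(best, -best_score(game, game.apply(state, move), depth - 1))
    return best
```

Driving offers nothing that maps onto this: no enumerable move list, no terminal score, no fixed board. That is the contrast being drawn.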


There is nothing wrong with research and development in this area since most of the technology would be applicable as driver aids. I do think driverless vehicles will be viable in certain controlled environments and specific use cases.

 

And since there is a large segment of consumers who want it and think it's viable, it's a good PR move.


Engineers working on self-driving cars show a clear inability to recognize the limitations of the technology.

 

Hubert Dreyfus, guys, explained decades ago the fundamental limitations of logic-gate-based computing (and it's all logic-gate-based computing) as a simulacrum of consciousness.

 

The problem, in brief, is this:

 

The rules of the road are unquestionably a language. A dashed yellow line means something. A solid white line means something. These meanings vary from country to country and exist as conventions, in the same way that, for instance, 'lion' has meaning in English. Granted, the rules of the road are neither as expressive nor as nuanced as spoken language, but--and this is the key point--they are apprehended in the same fashion.

 

Human beings are able to follow the rules of the road because they already have language acquisition skills.

 

More importantly, language is a component in the larger and preexisting realm of consciousness. Language enables shared experience and communication between individuals who have their own isolated consciousnesses. I see a lion; I tell you, "look at that lion over there," and you understand those words within your own consciousness. You recognize the "lion" within your field of vision because the two of us have a more or less shared understanding of what that word means within the larger conscious framework.

 

How that relates to the rules of the road is that if you have two individuals driving toward each other on a street with a dashed yellow line in the middle, both drivers (in the US, Canada, etc.) understand the meaning of that dashed yellow line: 'keep right, oncoming traffic to the left'. It's language: it's an arbitrary sign that communicates a clear message to more than one individual.
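To put the 'arbitrary convention' point in concrete terms, the marking-to-meaning relationship is essentially a jurisdiction-keyed lookup, which humans absorb the way they absorb vocabulary. A rough sketch follows; the entries are deliberately simplified illustrations, not an authoritative rule set.

```python
# Illustrative only: road markings are conventional signs whose meaning
# depends on jurisdiction. These entries are simplified examples.
MARKING_MEANINGS = {
    ("US", "dashed yellow centerline"): "two-way traffic; keep right, passing allowed when clear",
    ("US", "solid white line"):         "lane edge or boundary; crossing discouraged",
    ("UK", "dashed white centerline"):  "centre line of a two-way road",
}

def meaning(region: str, marking: str) -> str:
    # No human driver consults a table like this; the mapping is absorbed
    # as convention. The table only makes the arbitrariness visible.
    return MARKING_MEANINGS.get((region, marking), "no known convention")
```

The dashed yellow line 'means' what it means only because drivers in that jurisdiction agree that it does, which is the sense in which it is language.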

 

Now, here is where the problem comes in for AI.

 

Imagine that I see a lion. Instead of saying, "look at that lion", I say, "LOOK!"

 

You, almost certainly, will focus your attention on the lion.

 

Why?

 

Because language exists within a larger consciousness. We do not process our environment in language, we extract language from our environment. We are--to quote Heidegger--always already in our environment and we do not need language to understand it. We understand that, in this particular hypothetical instance, a lion is something noteworthy, and we don't need to be told explicitly to look at it.

 

How that applies to the limitations of AI and self-driving cars:

 

Imagine that you are driving down an unmarked road, with no oncoming traffic.

 

"Oh. That's easy. Just bear right."

 

But what if it's a two-lane one-way?

 

"Oh. Well it should be pre-programmed to identify that it's on a street that's a two-lane one-way"

 

What if it's a two-lane one-way at certain times of the day and a two-way street at other times? What if it's a two-lane one-way that is only a one-lane one-way during part of the day due to allowed parking on one side of the street? (All of these are common in urban environments.)

 

"Oh. Well that should also be pre-programmed into the system."

 

By whom? And at what cost? And to what degree of reliability? Who, in short, is responsible for all of the time and money required to provide the sanitized and unambiguous data that a computer needs to mimic--in this small instance--the remarkable ability of the human mind to infer and interpret?

 

And then who is responsible when the computer inevitably fails 'ungracefully' when presented with conflicting data (e.g., ambiguous street markings due to construction)?
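To give a feel for what 'pre-programmed by whom, and at what cost' actually entails, here is a hedged sketch of the kind of hand-maintained, time-dependent rule the scenario above implies. Every street name, hour, and configuration is invented for illustration; a real deployment would need something like this for every block, kept accurate indefinitely.

```python
from datetime import time

# Hypothetical, hand-maintained configuration for ONE street whose lane
# layout changes by time of day (all values are invented for illustration).
MAIN_ST_RULES = [
    # (start,       end,          lanes, direction)
    (time(7, 0),   time(9, 30),   2, "one-way eastbound"),  # rush hour: no parking
    (time(9, 30),  time(16, 0),   1, "one-way eastbound"),  # parking allowed on one side
    (time(16, 0),  time(23, 59),  2, "two-way"),            # evening configuration
]

def lane_config(now: time):
    matches = [rule for rule in MAIN_ST_RULES if rule[0] <= now < rule[1]]
    if len(matches) != 1:
        # Missing or conflicting data (a stale entry, construction, an hour
        # nobody covered): there is no graceful way to guess, so fail loudly.
        raise ValueError(f"ambiguous lane configuration at {now}")
    _, _, lanes, direction = matches[0]
    return lanes, direction
```

Someone has to write, verify, and perpetually update every one of those entries, and the failure mode when the data is wrong or ambiguous is exactly the 'ungraceful' one described above.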


I am sure in 25-50-75 years

 

That would require immaculate transportation network maintenance, as compared to the current situation, and a wholesale elimination of people-driven vehicles.

 

Otherwise, you're dealing with exactly the scenarios I outlined above: the frequent occurrence of situations that must be interpreted, and computers are horrible at interpretation and not getting better at it.


 

That would require immaculate transportation network maintenance, as compared to the current situation, and a wholesale elimination of people-driven vehicles.

 

Otherwise, you're dealing with exactly the scenarios I outlined above: the frequent occurrence of situations that must be interpreted, and computers are horrible at interpretation and not getting better at it.

 

The cost of the required infrastructure upgrades to make this all possible is beyond staggering. That alone will stymie this boondoggle.


 

That would require immaculate transportation network maintenance, as compared to the current situation, and a wholesale elimination of people-driven vehicles.

 

Otherwise, you're dealing with exactly the scenarios I outlined above: the frequent occurrence of situations that must be interpreted, and computers are horrible at interpretation and not getting better at it.

 

How things go wrong with modern technology and people: a Delta A320 lands at Ellsworth Air Force Base instead of Rapid City Airport. Modern airliner, satellite navigation, air traffic control, modern radar, airport navigation systems, visual?


I'm still waiting for someone, anyone, to demonstrate why exactly this is needed.

 

 

The justification (which I don't necessarily buy) is that it will reduce accidents and fatalities dramatically, reduce emissions and improve fuel economy.

 

It will also solve world hunger, find Jimmy Hoffa's body and eliminate the Zika virus completely.


 

How things go wrong with modern technology and people: a Delta A320 lands at Ellsworth Air Force Base instead of Rapid City Airport. Modern airliner, satellite navigation, air traffic control, modern radar, airport navigation systems, visual?

 

In fairness, Ellsworth's runway is bigger and a bit closer to Rapid than the Rapid City airport.... https://binged.it/2ccKe3w

 

But, yes, my point is more or less exactly your point.


 

 

The justification (which I don't necessarily buy) is that it will reduce accidents and fatalities dramatically, reduce emissions and improve fuel economy.

 

It will also solve world hunger, find Jimmy Hoffa's body and eliminate the Zika virus completely.

I've seen all of that, with no real evidence to back it up. Usually the point is just repeated, only louder and with a few fancier words added to make it seem like it's different.


And for individuals who would assure us that all of this granular data can be easily accumulated and stored: there isn't a national database of sales tax districts, and that information is much simpler and easier to track.

 

 

 

That would require immaculate transportation network maintenance, as compared to the current situation, and a wholesale elimination of people-driven vehicles.

 

Otherwise, you're dealing with exactly the scenarios I outlined above: the frequent occurrence of situations that must be interpreted, and computers are horrible at interpretation and not getting better at it.

 

WOW you are so upset about this.

 

WOW! You actually think you are smarter than hundreds of thousands of skilled engineers, scientists, and programmers.
