The camera never lies...
It’s never easy to get everyone looking just right when taking a group photograph. (BBC News)
Now a new breed of AI-powered photo tools is adding to the debate about what it means to photograph reality.
Google’s latest smartphones, the Pixel 8 and Pixel 8 Pro, released last week, go a step further than devices from other companies: they use AI to help alter people’s expressions in photographs.
It’s an experience we’ve all had: one person in a group shot looks away from the camera or fails to smile. Google’s phones can now look through your photos to mix and match from past expressions, using machine learning to put a smile from a different photo of them into the picture. Google calls it Best Take.
The devices also let users remove unwanted elements from a picture, whether a person or a building, filling in the space left behind with a feature called Magic Editor. This relies on deep learning, an artificial intelligence technique in which an algorithm predicts what texture should fill the gap by analysing the surrounding pixels, drawing on knowledge gleaned from millions of other images.
The photos do not have to be taken on the device, either. Using the Pixel 8 Pro, you can apply Magic Editor or Best Take to any picture in your Google Photos library.
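To give a rough sense of the gap-filling idea, the sketch below uses classical inpainting from the OpenCV library to erase a masked region and fill it from the surrounding pixels. It is only an illustration of the general principle, not Google's method: Magic Editor relies on generative deep-learning models, and the file names and mask coordinates here are invented for the example.

```python
# Illustration only: classical inpainting fills a masked area from nearby pixels.
# Magic Editor's generative models go far beyond this simple algorithm.
import cv2
import numpy as np

img = cv2.imread("group_photo.jpg")              # hypothetical input photo
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[100:220, 300:420] = 255                     # mark the region to remove

# Propagate colour and texture from the surrounding pixels into the gap.
filled = cv2.inpaint(img, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("group_photo_filled.jpg", filled)
```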
Professor Rafal Mantiuk, an expert on graphics and displays at the University of Cambridge, says it is important to remember that AI in smartphone photography is not there to make photos look like real life.
"People do not want to capture reality," he says. They want beautiful pictures, and the whole image-processing pipeline in a smartphone is designed to make photographs look good, not realistic.
Machine learning fills in information that is missing from smartphone photos because of the physical limits of the devices themselves.
It helps them zoom further, capture shots in the dark and, in the case of Google's Magic Editor, swap elements of a photo in and out, even a frown for a smile.
Manipulating images is nothing new; it is as old as the art form itself. But never has it been easier to augment the real with the artificial, thanks to AI.
Earlier this year, Samsung came under fire over the way it used deep learning to enhance photos of the Moon taken on its smartphones. Tests found that no matter how poor the starting image, it always produced a usable photo.
In other words, your Moon photo was not necessarily a photo of the Moon you were looking at.
The company acknowledged the criticism, saying it was working to reduce any potential confusion between taking a picture of the real Moon and an image of the Moon.
Of Google's new technology, Isaac Reynolds, who leads the team that develops the camera systems on its phones, says the firm adds metadata to its photos, the digital footprint of an image, using an industry standard to flag when AI has been used.
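The article does not name the standard involved, but the underlying mechanism is ordinary machine-readable metadata embedded in the image file. The hypothetical sketch below writes and reads back a simple EXIF tag with the Pillow library, purely to illustrate how an AI-use flag can travel inside the file itself; the file names and wording are made up.

```python
# Illustration only: embed and read back a provenance note in a JPEG's EXIF data.
# The actual industry standard Google uses is not specified here.
from PIL import Image

img = Image.open("edited_photo.jpg")             # hypothetical edited image
exif = img.getexif()
exif[0x0131] = "Edited with AI tools"            # 0x0131 is the standard EXIF "Software" tag
img.save("edited_photo_tagged.jpg", exif=exif)

# Any later reader of the file can check the flag.
print(Image.open("edited_photo_tagged.jpg").getexif().get(0x0131))
```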
He says it is a question the company discusses internally and has debated at length, because it has been working on these features for years, and that it is an ongoing conversation in which Google listens to what its users say.
Google is clearly confident that users will agree: the AI features are front and centre of the advertising campaign for its new phones.
So where does Google draw the line on image manipulation?
Mr Reynolds said the debate over the use of AI was too nuanced to simply point to a line and declare it a step too far.
The deeper you get into building these features, he explains, the more a single line looks like an oversimplification of what turns out to be a very difficult feature-by-feature decision.
Even as these new technologies raise ethical questions about what is and is not reality, Professor Mantiuk points out that our own eyes do not show us everything either.
We see sharp, colourful images, he says, because the brain can reconstruct information and infer even what is missing.
So while people may complain that cameras "fake things", he says, the human brain does much the same thing in a different way.