Delicious Ambiguity

We are the music makers,
And we are the dreamers of dreams,
Wandering by lone sea-breakers,
And sitting by desolate streams;—
World-losers and world-forsakers,
On whom the pale moon gleams:
Yet we are the movers and shakers
Of the world for ever, it seems.

ohnofixit:

truthtellingtime:

Just so everybody knows, the mirror is actually more reliable than the camera. Even though people say “the camera never lies”, it distorts your photographs a little bit. It has to turn a 3D image (you in real life) into a 2D image (a photograph) and consequently skews the proportions a little bit.

Also, “photogenic” is a real thing. Certain faces photograph well and others don’t. It’s all down to the angles, proportions and size of your features.

Have you ever seen someone stunning who looks great in professional photographs and not in candids? Yeah, that’s because there’s a huge difference between a professional and an amateur. Professionals know how to minimise the issues cameras have. Lighting, angles and even your distance from the camera play a part (the amount of distortion varies depending on how close you are).

TL;DR if you think you look great in the mirror but not in the photo, trust the mirror. You look great!

Pretty sure mirrors also convert 3D images into 2D images.

*edit* Thanks to the people who pointed out that depth perception still works in mirrors because both eyes see the reflected light from slightly different angles.
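
A minimal sketch of the distortion being described, assuming a toy pinhole-camera model (all the numbers below are made up for illustration, not measurements of any real face or lens): perspective projection scales every feature by one over its distance from the camera, so features that stick out towards the lens, like your nose, get magnified more when the camera is close.

```python
# Toy pinhole-camera model: how apparent facial proportions change
# with camera distance. Illustrative numbers only.

def projected_half_width(half_width, camera_distance, depth_offset=0.0):
    """Projected size of a feature sitting `depth_offset` metres in
    front of the face plane, seen by a pinhole camera
    `camera_distance` metres from the face plane.
    Perspective scales apparent size by 1 / distance-to-camera."""
    return half_width / (camera_distance - depth_offset)

for d in (0.3, 0.6, 1.5, 3.0):  # camera-to-face distance in metres
    nose = projected_half_width(0.015, d, depth_offset=0.02)  # nose tip ~2 cm proud of the face
    cheek = projected_half_width(0.070, d)                    # cheeks on the face plane
    print(f"{d:.1f} m: nose/cheek ratio = {nose / cheek:.3f}")
```

The ratio shrinks as the camera backs away, which is why a phone held at arm’s length exaggerates the nose compared with a portrait taken from a few metres back, and why the mirror (where the virtual image sits at twice your distance from the glass) can disagree with your selfies.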

(Source: owlygem, via madsunrises)

anthonyedwardstarks:

Author Chuck Palahniuk first came up with the idea for the novel Fight Club after being beaten up on a camping trip when he complained to some nearby campers about the noise of their radio. When he returned to work, he was fascinated to find that nobody would mention or acknowledge his injuries, instead saying such commonplace things as “How was your weekend?” Palahniuk concluded that people reacted this way because asking him what had happened would have required a degree of personal interaction, and his workmates simply didn’t care enough to connect with him on a personal level. It was his fascination with this societal ‘blocking’ which became the foundation for the novel.

(via beepboopboopbeep)

anotherjourneybytrain:

australian-government:

I hate when people ask questions during movies, like do you not understand that the movie purposely doesn’t tell you things in order to build suspense

"Who are they?" "What’s going on?" I DON’T KNOW, I HAVE BEEN WATCHING THE FILM AT THE SAME TIME AS YOU, I DID NOT WRITE THE FUCKING SCRIPT.

(via yeoldefucknsuch)

allthingslinguistic:

zmyaro:

To any Tumblrites who are deaf, hard of hearing, know people who are, or just enjoy cool tech: a start-up called MotionSavvy is working on technology that uses Leap Motion to recognize sign language and output written or spoken English. The project was started by a group of deaf students at RIT’s National Technical Institute for the Deaf (yay RIT!) who moved to San Francisco to develop the product with Leap.

The team has over 800 deaf beta testers, but they are looking for more.  They hope to have a product available to consumers by September of 2015.

For more information, check out this TechCrunch article and this video.

The links are definitely worth checking out: according to the TechCrunch article, the prototype only understands about 100 words at the moment, but they’re working on more with the beta testers. I’d guess it will eventually reach a level comparable to other kinds of machine translation (Google Translate, etc.), which, although by no means perfect, is still very useful.
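
For anyone curious how this kind of recognition works in principle, here is a hypothetical sketch, emphatically not MotionSavvy’s actual pipeline: a sensor like the Leap reports 3D hand-joint positions each frame, and an incoming frame can be matched against stored templates for known signs. Every sign name, coordinate and threshold below is invented for illustration.

```python
# Hypothetical nearest-template sign matcher. NOT MotionSavvy's code;
# the sign names, coordinates and threshold are all invented.
import math

# Each "sign" maps to a flattened vector of hand-joint coordinates.
TEMPLATES = {
    "hello":  [0.0, 1.0, 0.2, 0.5, 0.9, 0.1],
    "thanks": [0.3, 0.2, 0.8, 0.1, 0.4, 0.7],
}

def dist(a, b):
    """Euclidean distance between two landmark vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(frame, threshold=0.5):
    """Return the closest known sign, or None if nothing is close.

    A real system would normalise for hand size and position and
    match whole gesture sequences (e.g. with DTW or an HMM),
    not single frames."""
    best = min(TEMPLATES, key=lambda name: dist(frame, TEMPLATES[name]))
    return best if dist(frame, TEMPLATES[best]) < threshold else None

print(recognize([0.05, 0.95, 0.25, 0.45, 0.85, 0.15]))  # -> hello
```

The hard parts the team is actually working on, per the article, are vocabulary size and per-signer variation, which is exactly what the 800-plus beta testers help with.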

(via irishcoffeeandcream)