jtotheizzoe:

Wave Your Stamens in the Air Like Ya Just Don’t Care!

I feel like time travel, while perhaps scientifically infeasible, can be achieved technologically by manipulating the scale of time rather than our position along its arrow. This allows us to leave the present behind, and experience a sort of alternate reality.

Those alternate realities are places where flowers are not mere bee-buffets, decorative flourishes and aromatic embellishments. They are dynamic symbols of awakening, floral fireworks, like nature’s way of saying “Good morning! What do you have for me today!?”

I give you Flowers by Katka Pruskova, 730 hours of blooming buds condensed to mere minutes.

Wash it down with an animated version of Richard Feynman’s epic “Ode To A Flower”.

jtotheizzoe:

When you stop to realize that almost everything in this video is created by the patient efforts of either wind or water, it makes you step back and say “wow” just that much louder.

Your evening moment of zen: Landscapes: Volume 3 by Dustin Farrell

thenewenlightenmentage:

Astronomers measure nearby Universe’s ‘cosmic fog’

Researchers from the Laboratoire Leprince-Ringuet (CNRS/École Polytechnique) have carried out the first measurement of the intensity of the diffuse extragalactic background light in the nearby Universe, a fog of photons that has filled the Universe ever since its formation. Using some of the brightest gamma-ray sources in the southern hemisphere, the study was carried out using measurements performed by the HESS telescope array, located in Namibia and involving CNRS and CEA. The study is complementary to that recently carried out by the Fermi-LAT space observatory. These findings provide new insight into the size of the Universe observable in gamma rays and shed light on the formation of stars and the evolution of galaxies. They feature on the cover of the 16 January 2013 issue of the journal Astronomy & Astrophysics online.

Continue Reading

(via anndruyan)

infinity-imagined:

3D animation of Melotte 15 by J-P Metsavainio

climateadaptation:

Colonel Chris Hadfield currently lives on the International Space Station. He posts pictures of his work and the Earth, and your brain will explode. colchrishadfield.tumblr.com

Canadian astronaut, currently living in space aboard the ISS as Flight Engineer on Expedition 34, and soon to be Commander of Expedition 35.

I wonder if this guy has an NSFW blog as well.

(via astrotastic)

spaceplasma:

Richard Feynman - Ode To A Flower

From the BBC Interview for Horizon ‘The Pleasure of Finding Things Out’.
Animated by Fraser Davidson

(via likeaphysicist)

backyardpolitics:

The Long, Strange Journey of Einstein’s Brain : NPR

Albert Einstein died 50 years ago Monday. While that day marked the end of his life, it was only the beginning of a long, strange journey for his brain.

Thomas Harvey, a doctor at the hospital where Einstein died, removed the famous scientist’s brain and kept it with him over the next four decades. Harvey wanted to know what made Einstein a genius.

As Brian Burrell writes in his new book Postcards from the Brain Museum, Harvey wasn’t alone.

Scientists have long sought to understand the nature of genius and before computers and imaging technology, they had few options other than studying the actual brain.

Burrell discusses the long, strange journey of Einstein’s brain.

read more (via @gkbeg)

(via neurosciencestuff)

sagansense:

Space Station Opens Launch Pad for Tiny Satellites

Image 1: The satellites were released outside the Kibo laboratory using a Small Satellite Orbital Deployer attached to the Japanese module’s robotic arm on Oct. 4, 2012. Japan Aerospace Exploration Agency astronaut Aki Hoshide, flight engineer, set up the satellite deployment gear inside the lab and placed it in the Kibo airlock.
CREDIT: NASA

Image 2: TechEdSat measures only 10 centimeters across and cost less than $30,000. This image was released on Oct. 4, 2012.
CREDIT: NASA

Astronauts on the International Space Station have transformed their high-flying laboratory into a new kind of launch pad for tiny satellites in a bid to boost student interest and access to space.

This month, the space station’s Expedition 33 crew launched five tiny CubeSats, each only a few inches wide, using a small satellite orbital deployer from Japan’s space agency JAXA. They were the first CubeSats ever launched from the International Space Station, coming 2 1/2 years after NASA announced the CubeSat program.

“This was a learning experience for everyone,” said Andres Martinez, the NASA Ames project manager for one of the satellites.

The cubesats were launched from the station’s Japanese Kibo laboratory on Oct. 4, which also marked the 55th anniversary of the world’s first satellite launch in 1957 that placed Russia’s Sputnik 1 in orbit and ushered in the Space Age.

“Fifty-five years ago we launched the first satellite from Earth. Today we launched them from a spacecraft,” space station commander Sunita Williams of NASA said on launch day to mark the moment. “Fifty years from now, I wonder where we’ll be launching them from.”

The JAXA satellite-deploying device arrived at the station aboard a Japanese cargo ship in July. Japanese astronaut Akihiko Hoshide placed the deployer, which is about the size of a small rabbit cage, into a small airlock in the Kibo lab. Then, the astronaut sealed the airlock, opened it up to space, and commanded the station’s Kibo robotic arm to pick up the deployer and bring it outside for satellite deployment.

All told, the procedure took only four hours of astronaut time – with no spacewalk required.

“If you can imagine, deploying satellites from station can be quite risky,” Martinez said. “We were going through that whole experience of conducting analysis to ensure this would be something safe to do from station, not only from the point of deployment but also taking up the satellites inside station.”

Small satellite evolution

One of the cubesats launched from the space station was TechEdSat, a 10-centimeter-wide (3.9 inches) satellite that Martinez oversaw. Students at San Jose State University were responsible for most of the design and development work.

The students are operating a ground station where they will be able to listen to signals from TechEdSat. The satellite periodically sends out packets of data with information about its temperature, orbit and other parameters explaining its environment in space. The project cost about $30,000, excluding labor and launch costs.
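Just to make the telemetry idea concrete, here is a hypothetical sketch of the kind of housekeeping beacon a cubesat like TechEdSat might broadcast. The field layout, packet format and numbers are invented for illustration; the real TechEdSat downlink format is not described in this post.

```python
# Hypothetical housekeeping beacon for a cubesat like TechEdSat: a tiny
# fixed-size packet carrying temperature and rough orbit parameters.
# The field layout and values are invented for illustration only.

import struct
import time

BEACON_FORMAT = ">Ifff"  # uptime_s (uint32), temp_C, altitude_km, inclination_deg

def pack_beacon(uptime_s, temp_c, altitude_km, inclination_deg):
    """Pack one telemetry sample into a 16-byte big-endian packet."""
    return struct.pack(BEACON_FORMAT, uptime_s, temp_c, altitude_km, inclination_deg)

def unpack_beacon(raw):
    """Decode a packet back into a readable dictionary (ground-station side)."""
    uptime_s, temp_c, altitude_km, inclination_deg = struct.unpack(BEACON_FORMAT, raw)
    return {"uptime_s": uptime_s, "temp_C": temp_c,
            "altitude_km": altitude_km, "inclination_deg": inclination_deg}

# Example: values roughly matching an ISS-like deployment orbit (assumed).
packet = pack_beacon(int(time.time()) % 86400, 21.5, 410.0, 51.6)
print(unpack_beacon(packet))
```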

“It’s a huge STEM success,” Martinez said, referring to NASA’s program for attracting students to the fields of science, technology, engineering and mathematics. He added that NASA Ames made sure the students were prepared to meet the rigorous standards of the design and development process.

Anything that goes on to the space station must meet strict safety standards, including making sure there is no fire risk. Satellites in particular have items such as batteries and wires on board.

The students “were not put in a cage with a bunch of lions,” Martinez said. Instead “we prepped them and worked with them, and a couple of [advisers] attended those meetings in person.”

The satellite cube is expected to exceed its initial design lifetime of a month, but Martinez declined to give specifics because the final parameters for the design and orbit have not been analyzed yet.

Only one major objective will be unmet. Initially the satellite was supposed to compare OrbComm and Iridium communications techniques in space, but there was not enough time to meet the licensing requirements before the launch date.

Five satellites, one catapult

Of the other four satellites released Oct. 4, one of them, F-1, was a collaboration of Houston-based space hardware developer NanoRacks, Uppsala University in Sweden and FPT University in Vietnam.

The other three satellites were from institutions working with JAXA. The satellites were called RAIKO, WE WISH and FITSAT-1. The latter satellite is designed to write messages in the sky in Morse code, with the aim of letting researchers test out optical communication techniques.

NASA chose to release the satellites in two batches to minimize the chances of collision with the station, Martinez said.

As the satellites have no maneuvering capability, NASA calculated a trajectory that would make it very unlikely that the cubes’ orbit would ever intersect with that of the station.

With the success of the launches, NASA is on its way to reducing the expense of civilian access to space. It is cheaper to deploy a satellite from the space station than from Earth.

“The whole idea is about lowering cost,” said Victor Cooley, the space station’s Expedition 33/34 lead increment scientist, in a recent interview on NASA Television. “If we can lower the cost by these payloads being a secondary payload on a rocket ― or, in this case, HTV [a Japanese cargo vehicle] which is already carrying cargo to the station ― that makes it an even lower cost for the smallsats to be deployed.”

There are no firm plans for when NASA and JAXA will do such an exercise again, but Martinez says there is a “very large pool” of students and engineers who are eager to take part.

“It got attention to the very top of NASA, and everyone is super-excited,” he said.

staceythinx:

Selections from The Rice Field by Haruto Maeda

Maeda on his project:

Traveling in Asia and Africa, my heart warmed to the rural scenery, the beautiful nature, and the tolerance of the people in these regions. My connection to these areas was strengthened by boyhood memories of the pastoral scenes of my native Japan, etched in my memory. My memories of the sea, forests, waterfalls and rice fields resonated in monochrome. The rice field, the staff of life to the Japanese, has had a great effect on the scenery, culture and sense of morality of Japan. However, due to unplanned development forced upon many rural areas of Japan, the beauty and history of over 2,000 years of rice harvesting are diminishing every year. Fearing their loss within a few years, I hurriedly began to photograph these areas. The scenery is silent, but the power of the photograph can touch people’s minds and make them recognize the importance of the cultural icon of the rice field.

(via sagansense)

discoverynews:

howstuffworks:

How the Tesla Turbine Works:

Most people know Nikola Tesla, the eccentric and brilliant man who arrived in New York City in 1884, as the father of alternating current, the form of electricity that supplies power to almost all homes and businesses. But Tesla was a prodigious inventor who applied his genius to a wide range of practical problems. All told, he held 272 patents in 25 countries, with 112 patents in the United States alone. You might think that, of all this work, Tesla would have held his inventions in electrical engineering — those that described a complete system of generators, transformers, transmission lines, motors and lighting — dearest to his heart. But in 1913, Tesla received a patent for what he described as his most important invention. That invention was a turbine, known today as the Tesla turbine, the boundary layer turbine or the flat-disk turbine.

Interestingly, using the word “turbine” to describe Tesla’s invention seems a bit misleading. That’s because most people think of a turbine as a shaft with blades — like fan blades — attached to it. In fact, Webster’s dictionary defines a turbine as an engine turned by the force of gas or water on fan blades. But the Tesla turbine doesn’t have any blades. It has a series of closely packed parallel disks attached to a shaft and arranged within a sealed chamber. When a fluid is allowed to enter the chamber and pass between the disks, the disks turn, which in turn rotates the shaft. This rotary motion can be used in a variety of ways, from powering pumps, blowers and compressors to running cars and airplanes. In fact, Tesla claimed that the turbine was the most efficient and most simply designed rotary engine ever built.

If this is true, why hasn’t the Tesla turbine enjoyed more widespread use? Why hasn’t it become as ubiquitous as Tesla’s other masterpiece, AC power transmission? These are important questions, but they’re secondary to more fundamental questions, such as how does the Tesla turbine work and what makes the technology so innovative? We’ll answer all of these questions on the next few pages. But first, we need to review some basics about the different types of engines developed over the years.

Keep reading…
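To get a feel for the boundary-layer idea, here is a toy spin-up model that treats the fluid between the disks as a simple viscous coupling dragging the rotor toward the fluid’s tangential speed. Every constant below is an illustrative guess, not a figure from Tesla’s patent or the HowStuffWorks article.

```python
# Toy spin-up model of a bladeless (boundary-layer) turbine: the fluid
# between the disks drags the rotor toward its own tangential speed, and a
# constant load torque resists.  All constants are illustrative guesses.

import math

FLUID_SPEED = 150.0   # m/s, tangential speed of the injected fluid (assumed)
DISK_RADIUS = 0.10    # m, outer radius of the disks (assumed)
INERTIA = 0.02        # kg*m^2, rotor moment of inertia (assumed)
DRAG_COEFF = 0.5      # N*m of torque per m/s of slip, lumped viscous coupling (assumed)
LOAD_TORQUE = 5.0     # N*m, constant external load (assumed)
DT = 0.001            # s, integration step

omega = 0.0           # rad/s, rotor angular velocity
for _ in range(20_000):                       # simulate 20 seconds
    slip = FLUID_SPEED - omega * DISK_RADIUS  # fluid outruns the disk rims
    drive_torque = DRAG_COEFF * slip          # viscous drag on the disk faces
    omega += (drive_torque - LOAD_TORQUE) / INERTIA * DT

print(f"steady-state speed ~ {omega * 60 / (2 * math.pi):.0f} rpm")
```

The rotor settles where the torque from the remaining slip exactly balances the load, which is the essential trade-off of dragging a shaft around with fluid friction instead of pushing on blades.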

it said tesla. i got excited. i do not regret my actions.

supernovaexplosion:

Whoa.

Theresa Klein talks about Achilles, the first machine to move in a biologically accurate way.

“Our robot, named Achilles, is the first to walk in a biologically accurate way. That means it doesn’t just move like a person, but also sends commands to the legs like the human nervous system does.

Each leg has eight muscles—Kevlar straps attached to a motor on one end and to the plastic skeleton on the other. As the motor turns, it pulls the strap, mimicking the way our muscles contract. Some of Achilles’ muscles extend from the hip or thigh to the lower leg so they can project forces all the way down the limb. This allows us to put most of the motors in the hips and thighs. Placing them up high keeps the lower leg light, so that it can swing quickly like a human’s lower leg.

In people, neurons in the spinal column send out rhythmic signals that control our legs. It’s like a metronome, and sensory feedback from the legs alters the pace. Your brain can step in to make corrections, but it doesn’t explicitly control every muscle, which is essentially why you can walk without thinking about it. For our robot, a computer program running off an external PC controls movement in a similar way. With each step, the computer sends a signal to flex one hip muscle and extend the other. The computer changes the timing of those signals based on feedback from the legs’ load and angle sensors. A similar control system handles the lower muscles.

Modeling human movement has applications outside of robotics. It could also help us understand how people recover after spinal-cord injuries, for example. But our robot is still a very simplified model—it has no torso and can’t handle complex terrain. Initially, we also had a problem with its feet slipping. We thought about different types of rubber to give its feet more grip but eventually realized a solution already exists. Now, the robot wears a pair of Keds.”
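For anyone curious how a “metronome plus sensory feedback” controller might look in code, here is a minimal sketch of that scheme. The sensor stand-in, gains and timing are illustrative assumptions, not the actual Achilles controller.

```python
# Minimal sketch of a rhythm generator driving antagonistic hip muscles,
# with a load signal retarding the rhythm.  Names, gains and the fake
# sensor below are assumptions for illustration only.

import math

STEP_DT = 0.01        # s, control-loop period (assumed)
BASE_FREQ = 1.0       # Hz, nominal stepping rhythm (assumed)
FEEDBACK_GAIN = 0.3   # how strongly leg loading slows the rhythm (assumed)

def leg_load_sensor(t):
    """Stand-in for the real load sensor: pretend loading peaks mid-stance."""
    return max(0.0, math.sin(2 * math.pi * BASE_FREQ * t))

phase, t = 0.0, 0.0
for _ in range(300):                          # simulate 3 seconds of walking
    load = leg_load_sensor(t)
    # Sensory feedback alters the pace: heavy loading holds the metronome back.
    phase += 2 * math.pi * BASE_FREQ * (1.0 - FEEDBACK_GAIN * load) * STEP_DT
    # Antagonistic drive: flex one hip muscle while extending the other.
    flexor_cmd = max(0.0, math.sin(phase))
    extensor_cmd = max(0.0, -math.sin(phase))
    t += STEP_DT

print(f"t = {t:.2f} s, flexor = {flexor_cmd:.2f}, "
      f"extensor = {extensor_cmd:.2f}, cycles = {phase / (2 * math.pi):.2f}")
```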

(via sagansense)


Study clarifies process controlling night vision

New research reveals the key chemical process that corrects for potential visual errors in low-light conditions. Understanding this fundamental step could lead to new treatments for visual deficits, or might one day boost normal night vision to new levels.

Like the mirror of a telescope pointed toward the night sky, the eye’s rod cells capture the energy of photons - the individual particles that make up light. The interaction triggers a series of chemical signals that ultimately translate the photons into the light we see.

The key light receptor in rod cells is a protein called rhodopsin. Each rod cell has about 100 million rhodopsin receptors, and each one can detect a single photon at a time.

Scientists had thought that the strength of rhodopsin’s signal determines how well we see in dim light. But UC Davis scientists have found instead that a second step acts as a gatekeeper to correct for rhodopsin errors. The result is a more accurate reading of light under dim conditions.

A report on their research appears in the October issue of the journal Neuron in a study entitled “Calcium feedback to cGMP synthesis strongly attenuates single photon responses driven by long rhodopsin lifetimes.”
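A rough way to see why a saturating feedback step makes single-photon responses more reproducible: simulate random rhodopsin lifetimes and compare the spread of responses with and without a clipping feedback stage. The exponential lifetimes and constants below are illustrative assumptions, not the model from the Neuron paper.

```python
# Toy Monte Carlo: each absorbed photon keeps rhodopsin active for a random
# time, and a saturating feedback stage clips responses driven by unusually
# long lifetimes, so single-photon responses come out more uniform.
# Distributions and constants are illustrative assumptions only.

import random
import statistics

random.seed(1)

MEAN_LIFETIME = 1.0      # arbitrary units, mean rhodopsin active lifetime (assumed)
FEEDBACK_CEILING = 1.5   # response level where the feedback saturates (assumed)

def single_photon_response(lifetime, with_feedback):
    raw = lifetime                                   # response grows with active lifetime
    if with_feedback:
        return raw / (1.0 + raw / FEEDBACK_CEILING)  # long lifetimes gain little extra
    return raw

lifetimes = [random.expovariate(1.0 / MEAN_LIFETIME) for _ in range(10_000)]

for label, fb in (("no feedback", False), ("with feedback", True)):
    responses = [single_photon_response(lt, fb) for lt in lifetimes]
    cv = statistics.stdev(responses) / statistics.mean(responses)
    print(f"{label:>13}: trial-to-trial variability (CV) = {cv:.2f}")
```

With the clipping stage in place the spread shrinks, which is the qualitative point of the “gatekeeper” result described above.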

(via neurosciencestuff)