
Lections on Open Frameworks


DESCRIPTION

Lections on OpenFrameworks and interactive multimedia. We will discuss art, graphics, sound, computer vision, OpenCV, OSC and Arduino - all in connection with OpenFrameworks, with many examples.


Page 1: Lections on Open Frameworks

Denis Perevalov

Lections on OpenFrameworks and Interactive Multimedia

Interactive Multimedia

1. Interactive multimedia systems. Introduction to openFrameworks

2. Interactive art (lection by Ksenia Fedorova)

Graphics

3. Two-dimensional graphics

4. Shaders

Sound

5. Interactive sound

Computer Vision

6. Introduction to Computer Vision. Grabbing and processing camera images

7. OpenFrameworks and OpenCV

Communication with external devices and programs

8. Communicating with other programs via OSC

9. Connecting external sensors using Arduino

Contacts: [email protected], Ural Federal University, Ekaterinburg, Russia

See in-depth details in my book "Mastering openFrameworks".

The book's examples are free, see

masteringof.wordpress.com

Page 2: Lections on Open Frameworks

1. Interactive multimedia systems. Introduction to openFrameworks

Page 3: Lections on Open Frameworks

Preface

What is an interactive multimedia system?

Page 4: Lections on Open Frameworks

Interactive multimedia system

Page 5: Lections on Open Frameworks

Examples

FunkyForest

Emily Gobeille and Theodore Watson (one of the openFrameworks creators), for the Cinekid Festival 2007 in the Netherlands. http://zanyparade.com/v8/projects.php?id=12

Page 6: Lections on Open Frameworks

Examples

Hand from above

by Chris O'Shea

Page 7: Lections on Open Frameworks

Examples

Body Paint

by Mehmet Akten

Page 8: Lections on Open Frameworks

Definition

An interactive multimedia system is a hardware and software system which:

1) works in real time;

2) can input data using various sensors, cameras and other signal sources;

3) can output data through graphics, sound, haptics, robotics and other devices.

Page 9: Lections on Open Frameworks

Interactive Multimedia System

Technologies: computer vision, computer graphics, computer sound / music, "physical computing".

Stages: the idea, concept, design, development, implementation, equipment and installation.

Page 10: Lections on Open Frameworks

Ways of developing interactive systems

- Low-level libraries
- Middle-level platforms
- High-level environments

Page 11: Lections on Open Frameworks

Low-level libraries

OpenCV - image processing, analysis and recognition

OpenGL (Open Graphics Library) - high-speed graphics

OpenCL (Open Computing Language) - parallelizing and speeding up computations, in particular by means of the GPU

OpenAL (Open Audio Library) - sound

Web servers, Box2D (a 2D physics engine), Bullet (a 3D physics engine), and so on...

Page 12: Lections on Open Frameworks

Middle-level Platforms

These are platforms for "creative coding": each includes a large set of functions and libraries integrated for convenient programming.

openFrameworks

Language: C/C++

Processing

Language: Java. For computer vision, Java is slow.

Cinder

Language: C/C++. Appeared recently; gaining popularity.


Page 13: Lections on Open Frameworks

High-level environments

"Visual programming" environments allow implementing projects without actual programming. It is important that they can be extended with plugins made with low-level libraries. They can also work together with middle-level platforms.

Max / MSP / Jitter

Focused on audio and VJ-ing.

VVVV

Focused on visual effects.

Unity3D

Focused on high-quality 3D.

Example: Sniff - interactive dog, Unity3D + openFrameworks. http://www.vimeo.com/6400266

Page 14: Lections on Open Frameworks

Fields of application

Using only computer vision and computer graphics (and sound), a wide range of interactive systems is currently being produced:

- Advertising,
- Entertainment,
- Training,
- Scientific,
- Health,
- Art.

Page 15: Lections on Open Frameworks

Course Description

Page 16: Lections on Open Frameworks

What we will do

(1) The main interest

- Creation of interactive multimedia systems based on recognition of video and audio signals.

(2) Moderate interest

- 3D graphics, physics simulation, sound generation.

(3) Basis for application development - OpenFrameworks

Page 17: Lections on Open Frameworks

Course Content

1. Introduction to OpenFrameworks

- Basics of openFrameworks. General principles of real-time systems, vision systems and interactive systems.
- 2D graphics.
- Receiving and processing images from the camera; the basics of OpenCV.
- Receiving and processing of sound.
- Generation of sound; playback of audio samples.
- 3D graphics.
- Basic mapping.
- Meeting high-level programs: Unity3D, TouchDesigner, QuartzComposer; connecting openFrameworks with them.
- Connecting external devices via the Arduino.

Page 18: Lections on Open Frameworks

Course Content

2. Lecture "The Strategy of interactive art."

by Ksenia Fedorova (curator at the Yekaterinburg branch of the National Centre for Contemporary Art; postgraduate student at the Department of Aesthetics, Ethics, Theory and History of Culture, Philosophical Faculty, Ural State University)

Page 19: Lections on Open Frameworks

Course Content

3. Working on your projects

Students will be asked to carry out, under our supervision, a number of projects related to video analysis and the generation of graphics / sound.

Page 20: Lections on Open Frameworks

Recommended Reading

OpenFrameworks

Joshua Noble, "Programming Interactivity: A Designer's Guide to Processing, Arduino, and openFrameworks"

To translate the book into your language, use http://www.onlinedoctranslator.com/translator.html

Page 21: Lections on Open Frameworks

Links

OpenFrameworks homepage: www.openframeworks.cc

OpenFrameworks applications list: www.creativeapplications.net/category/openframeworks/

Page 22: Lections on Open Frameworks

Introduction to OpenFrameworks

Page 23: Lections on Open Frameworks

What is OpenFrameworks

OpenFrameworks is an open library (framework) for C++, designed for working with multimedia.

It was developed as a tool for designers and artists working with interactive design and media art.

One of the main advantages of openFrameworks is its extreme ease of use.

It is a very popular tool for creating interactive systems, image processing, installations and various projects working with graphics, sound and input / output to external devices.

Page 24: Lections on Open Frameworks

Where does OpenFrameworks work

Windows

Mac OS X

Linux

iPhone OS

Page 25: Lections on Open Frameworks

History of creation

OpenFrameworks was developed by Zach Lieberman, Theo Watson, Arturo Castro and Chris O'Shea, together with other members of the Parsons School of Design, MediaLabMadrid, Hangar Center for the Arts, etc.

Development started at the Parsons School of Design (New York), where Lieberman was a student.

Page 26: Lections on Open Frameworks

When to use OpenFrameworks

- Multimedia capabilities are needed (video input / output, audio input / output, connection of external devices)

- High-speed analysis of input data is needed (usually possible only in C/C++)

- A rather simple logic of data processing: "calculate -> draw -> calculate -> draw -> ..."

- No or only a small amount of GUI is needed

- A short development time is desirable

http://www.flong.com/projects/tables/

Page 27: Lections on Open Frameworks

When not to use OpenFrameworks

- A lot of GUI is required (e.g. a text editor): instead, use GUI development tools such as Qt, Cocoa, ...

- Complex rendering control logic is required (e.g. a 3D game): instead, use engines like Unity3D, ...

- Multimedia capabilities are not needed (e.g. a web server)

- You have the money, time and desire for industrial application development, so you can build your project from a number of low-level libraries.

Page 28: Lections on Open Frameworks

Application structure

The architecture of openFrameworks is designed for handling multimedia information in real time.

This determines the application's appearance and structure.

Page 29: Lections on Open Frameworks

Application appearance

Normally an openFrameworks application has two windows: a graphics window and a console window for logs.

Page 30: Lections on Open Frameworks

Application structure

Page 31: Lections on Open Frameworks

Application structure

Page 32: Lections on Open Frameworks

Getting Started with OpenFrameworks

1. Install Visual Studio C++ Express Edition

2. Download and install OpenFrameworks from http://www.openframeworks.cc/download; there select "visual studio 2010 FAT". "FAT" means that many addons and examples are included.

3. Test the installation: go to the folder openFrameworks/app/examples, then compile and run one example.

Page 33: Lections on Open Frameworks

Create a new project "Pendulum" with OpenFrameworks

1. In the folder openFrameworks/app/examples, take the example emptyProject and copy it to /App/myApps/Mayatnik

2. Enter the code in the functions of your class derived from ofBaseApp:

setup();  // set parameters at startup
update(); // computation, analysis of input data
draw();   // draw the current state

Page 34: Lections on Open Frameworks

An example of a pendulum with an elastic band

Page 35: Lections on Open Frameworks

The "pendulum" code: testApp.h

// You don't need to change this file

#ifndef _TEST_APP
#define _TEST_APP

#include "ofMain.h"

class testApp : public ofBaseApp {
public:
    void setup();
    void update();
    void draw();

    void keyPressed(int key);
    void keyReleased(int key);
    void mouseMoved(int x, int y);
    void mouseDragged(int x, int y, int button);
    void mousePressed(int x, int y, int button);
    void mouseReleased(int x, int y, int button);
    void windowResized(int w, int h);
};

#endif

Page 36: Lections on Open Frameworks

The "pendulum" code: testApp.cpp, p. 1/3

#include "testApp.h" // this class is already declared

// Variables
ofPoint pos0;     // point of suspension
ofPoint pos;      // current position
ofPoint velocity; // speed

// Setup
void testApp::setup() {
    pos0 = ofPoint(500.0, 200.0);
    pos = ofPoint(700.0, 200.0);
    velocity = ofPoint(0.0, 0.0);

    // openFrameworks will redraw the background each frame
    // with this color: red=255, green=255, blue=255, i.e. white
    ofBackground(255, 255, 255);
}

Page 37: Lections on Open Frameworks

The "pendulum" code: testApp.cpp, p. 2/3

// Update the state
void testApp::update() {
    // Calculate the Hooke's force
    ofPoint delta = pos - pos0;
    float len = sqrt(delta.x * delta.x + delta.y * delta.y);
    float guk = 0.01 * (len - 100.0); // 100.0 - rest length of the rubber band
    ofPoint gukForce = delta * (-guk / len);

    ofPoint g(0.0, 9.8); // gravity

    // Newton's second law
    float dt = 0.2;
    // TODO: this should rely on the real time elapsed between frames

    float mass = 0.1;
    ofPoint force = gukForce + g * mass;
    ofPoint a = force / mass;
    velocity += a * dt;
    pos += velocity * dt;
}

Page 38: Lections on Open Frameworks

The "pendulum" code: testApp.cpp, p. 3/3

// Draw
void testApp::draw() {
    // Rubber band
    ofSetColor(0, 0, 255); // blue color
    ofLine(pos0.x, pos0.y, pos.x, pos.y); // draw a line

    // Circle
    ofSetColor(255, 0, 0); // red color
    ofFill(); // enable fill
    ofCircle(pos.x, pos.y, 20); // draw a circle
}

... Next comes the rest of the code: empty functions for processing the mouse and keyboard. We do not need to change them now.

Page 39: Lections on Open Frameworks

Homework

On the basis of the "pendulum", make a "branching pendulum" with pendulums of different weights. It will have much more interesting dynamics.

Page 40: Lections on Open Frameworks

Projects to do

Choose one of the projects for independent or collective development (or suggest your own idea):

1. Creating a 3D sculpture
2. Flying flowers
3. Dynamic mapping onto a cube

Page 41: Lections on Open Frameworks

Creating a 3D Sculpture

Script
A sheet of paper with a marker printed on it is placed on the table. The system draws a 3D body (a cube or a sphere) on it.

The user takes a second marker in hand and starts knocking it into the 3D body, beating small particles off it.

By rotating the sheet of paper and working with the second marker, you can make a 3D sculpture.

Technology
To recognize the markers, use Augmented Reality technology (for example, the ARToolKit library). The sculpture can be a set of small cubes (50x50x50).

Page 42: Lections on Open Frameworks

Flying flowers

Script
A classical interactive installation: the audience waves their hands in front of the camera, and the screen shows, for example, flower petals. Where a viewer waves, the petals scatter in different directions, and a picture appears underneath. After some time the petals fly back into place. The viewer must wave actively to clear the whole picture.

Technology
1) Use optical flow and background analysis to analyze the users' movements.
2) Do the rendering in openFrameworks, or in Unity3D or TouchDesigner (transferring data from openFrameworks through OSC).

Page 43: Lections on Open Frameworks

Dynamic mapping onto a cube

Script
Project images onto the cube's faces. The specificity is that the cube can be moved, and the camera (or Kinect) should track it and synchronize the projector images accordingly. Thus, no matter how we move the cube, it will be illuminated by the projector correctly.

Technology

1) Track the cube's faces using pattern recognition technology. Camera and projector calibration is also needed.

2) Do the rendering in openFrameworks, or in Unity3D or TouchDesigner (transferring data from openFrameworks through OSC).

Page 44: Lections on Open Frameworks

2. Interactive art (Lecture by Ksenia Fedorova)

Page 45: Lections on Open Frameworks

VIDEO ART

• Nam June Paik, Buddha Watches TV

• Tony Oursler, Let's Switch, 1996

Page 46: Lections on Open Frameworks

ELECTRONIC ART

Mark Napier,

Shredder,

1998

Page 47: Lections on Open Frameworks

TELE ART

The Telectroscope lets Londoners and New Yorkers see each other in real time.

Page 49: Lections on Open Frameworks

VIRTUAL ENVIRONMENTS

• Char Davies, Tree Pond, from Osmose, 1995

Page 50: Lections on Open Frameworks

Home of the Brain. 1990-92.

Monika Fleischmann & Wolfgang Strauss

Page 52: Lections on Open Frameworks

Interactive arts classification

• Interaction with video and film images

• Interaction between the body and a data field (static / dynamic)

• Interaction as a technically mediated dialogue between people

• Interaction in virtual mode and in hypertext environments

• Interactive immersive environments

• Interactive architecture

• etc.

Page 53: Lections on Open Frameworks

Nam June Paik

«Participation TV», 1963

Page 54: Lections on Open Frameworks

Bruce Nauman

«Live-Taped Video Corridor», 1970

Page 55: Lections on Open Frameworks

Jeffrey Shaw: Legible City, CAVE, T-Visionarium

Page 56: Lections on Open Frameworks

Sommerer/Mignonneau, «A-Volve», 1993 – 1994

http://medienkunstnetz.de/works/a-volve/images/1/

Page 57: Lections on Open Frameworks

WITH ALL SENSES
How can human beings and computers communicate with each other without the use of a mouse or a keyboard? "With All Senses" spotlights scenarios with great promise for a future beyond the realm of computer screens and cable spaghetti. Bringing your body into play and using your voice and sense of touch give rise to new modes of interaction between the real world and digital ones. You won't believe your senses as pictures you've drawn yourself come to life and virtual objects suddenly begin to radiate heat or cold.

Page 58: Lections on Open Frameworks

• SAVEYOURSELF!!!

• 2007
• Hideyuki Ando (JP), Tomofumi Yoshida (JP), Junji Watanabe (JP)

• Think that nothing can make you lose your equilibrium? Then it’s time for you to try “SaveYourSelf!!!”

You start by using a digital camera to take a self-portrait and then loading it onto a compact display floating in a bowl of water. Now, all you have to do is put on a set of headphones with a built-in electrode, pick up the bowl of water, and the action gets underway. The motion of the water is transmitted directly to your body.

The compact display features an integrated acceleration sensor that measures shifts of the water surface and sends the data to the electrode in the headphones. It emits a low-voltage current that stimulates the portion of the inner ear that regulates the sense of balance.

A novel sensory interface based on galvanic vestibular stimulation (GVS) was developed for "SaveYourSelf!!!". Similar procedures are employed in medical tests investigating how well a person's sense of balance functions. Even a very weak electric current (less than ~1.5 mA) can disturb the feeling of equilibrium.

Page 59: Lections on Open Frameworks
Page 60: Lections on Open Frameworks

Dmitry and Elena Kavarga (Moscow)

RESIDUAL THOUGHT-FLOWS
interactive installation

Georgy Beloglazov - audio

Timur Shchukin - biofeedback

"Residual Thought-Flows" are transparent two-meter glass flasks with active speakers and biofeedback devices built into their ends. The viewer makes contact with the "thought-flow" inside the flask by grasping the metal spokes on its two sides. One must calm one's emotional state, slowing one's own thought-flow and achieving the greatest possible inner silence. If the efforts synchronize, the sculpture responds by changing the audio track. The devices built into the objects measure, in real time, the level of emotional activation of the person who has come into contact with the interactive sculpture. The method used in the project is called GSR (galvanic skin response) registration. It is based on the phenomenon of strong correlation between changes in the electrical conductivity (resistance) of the organism's biological tissue and the level of emotional activation.

Page 61: Lections on Open Frameworks

DMITRY KAWARGA, MODEL OF BIPOLAR ACTIVITY

Interaction with an object of the Model of Bipolar Activity stimulates the feeling of equilibrium and harmony through a simple touch of the metal plates that transfer impulses. www.kawarga.ru

Page 62: Lections on Open Frameworks

• Relax and win! "BRAINBALL" turns the conventional concept of competition on its head: the victor isn't the contestant who's most active, but rather the one whose brainwaves signal the deeper state of relaxation.

Page 63: Lections on Open Frameworks

• Alvaro Cassinelli, The Khronos Projector, 2006

• Nowadays, controls that make it easy and convenient to play films are something we take completely for granted. On a DVD player, you can record, fast-forward and reverse, or pause on an individual image. Nevertheless, it's only been possible to view these sequences of shots in one predetermined temporal direction. Now, "Khronos Projector" makes it possible to see a film from a completely new point of view.

Page 64: Lections on Open Frameworks
Page 65: Lections on Open Frameworks

Muench / Furukawa, "Bubbles"

Page 66: Lections on Open Frameworks

Marek Walczak & Martin Wattenberg, NO PLACE (2007-2008)

These paradise types are endgames of ideological constructs, whether a vision of a classless society or a scientist's vision of a sustainable environment. Current paradises include, but are not exclusive to: Allah's Garden, American Dream, Communism, Ecological Earth, Nirvana, ...

Page 67: Lections on Open Frameworks
Page 68: Lections on Open Frameworks

George Legrady, SensingSpeakingSpace,

2000-2002

Page 69: Lections on Open Frameworks

Joachim Sauter, Floating.numbers, 2004

Numbers are commonly seen as a quantitative measure of entities. Depending on the context, however, they often also have religious, historical, mathematical and philosophical meanings. "floating.numbers" attempts to bring back this often forgotten or unknown layer of meaning into their interpretation in the age of digital determinism. "floating.numbers" is a 9 x 2 meter interactive table on which a continuous stream of numbers is floating.

Page 70: Lections on Open Frameworks

Sergey Kotzun

Movement perception

Realization: a video stream from a PC with a connected web camera is transmitted to a projection screen using a multimedia projector. When a viewer appears in the working zone of the web camera, he sees himself on the projection screen, surrounded by transparent squares (the squares exist only on the projection screen). A program installed on the PC analyzes the viewer's movements. When the viewer and a square come into contact on the screen, a sample corresponding to the square is played, as in a musical instrument, and a primitive geometrical element is added to the image on the screen. After a series of contacts, the space of the exhibition becomes filled with sounds and the screen is filled with abstract suprematist compositions.

Page 71: Lections on Open Frameworks

CyberHelmet "TRIP", CYLAND: Anna Frantz, Marina Koldobskaya, Oleg Rodionov, Michail Chernov, Olga Rostrosta

The viewer moves through the exhibition hall in a helmet with wireless video glasses. Motion sensors are embedded in the helmet. The glasses and sensors are connected with a computer by radio signal. The sensors catch the velocity of the viewer's movement and send signals over the wireless network to a computer, where they are transformed into a psychedelic visual pattern. The faster the viewer moves (walks, swirls, dances), the stronger the psychedelic TRIP.

Page 72: Lections on Open Frameworks

Media Performance Drumpainting
Anna Frantz, Marina Koldobskaya, Michail Chernov

Streams of images are mixed during the session by means of a custom interface, creating a kind of abstract "animation" which is demonstrated to the public during the performance. The software allows not only mixing images, but also changing the colour and size of the image on each channel (maximum 4), and using several additional effects (multiplication of an image, transparency change, delay of the previous image, inversion and so on) to get unlimited variations.

Page 73: Lections on Open Frameworks

• Interactive Video Installation ALIEN SPACE by Alexandra Dementieva

• 'Alien Space' is presented as a miniature of the human universe: it is amazing, cruel, childish, stupid, beautiful and fragile at the same time. It reacts to all outside stimuli as well as internal changes: every movement by a visitor triggers either sound or image and changes its composition.

• The installation consists of 800 balloons forming 2 corridors leading to a central circular space. Images of various international television personalities continually mutating into extraterrestrials and robots are projected onto the balloons. The sound environment consists of recordings in 67 languages that mix in a Babylonian cacophony.

Page 74: Lections on Open Frameworks

Paul Sermon

«Telematic Dreaming» (1992)

Page 75: Lections on Open Frameworks

Ken Goldberg, Telegarden, 1995-2004

Page 76: Lections on Open Frameworks

Rafael Lozano-Hemmer, Amodal Suspension / Vectorial Elevation

Page 77: Lections on Open Frameworks

"Strategies" of interactive art (R. Kluszczynski):

- Instrument

- Game

- Archive

- Labyrinth

- Rhizome

- System

- Network

- Spectacle

Page 78: Lections on Open Frameworks

Agnes Hegedüs, "Things Spoken", 1999

Page 79: Lections on Open Frameworks

• 1. Miguel Chevalier, Ultra-Nature, 2006. Interactive virtual reality installation. http://www.miguel-chevalier.com/site/pages/autr/41/mosafr.htm
• 2. Rejane Cantoni / Leonardo Crescenti, INFINITE CUBED, 2007. Immersive and interactive installation. http://www.rejanecantoni.com/infinitoaocubo.html
• 3. Rejane Cantoni / Leonardo Crescenti, SOLAR, 2009 (?). Immersive and interactive installation. http://www.rejanecantoni.com/infinitoaocubo.html
• 4. Anne-Sarah Le Meur, EYE OCEAN, 2009. 3D interactive experimental image. http://aslemeur.free.fr/projets/agenda_eng.htm
• 5. Rudolfo Quintas. http://www.youtube.com/watch?v=8166KeSZdVA
• 6. Laboratório de Luz, Modulador de Luz 2.0 (2006), 3.0 (2008). http://www.laboluz.org/base_e.htm
• 7. Jayoung Bang & Yunjun Lee, Memorandum on Vessels, 2008. www.Raininganimals.net
• 8. Bonnie Mitchell and Elainie Lillios, Encounter(s), 2007. Audio-visual interactive immersive installation. http://immersiveinstallationart.com/encounters/index.html
• 9. Agnes Hegedüs, Bernd Lintermann, Jeffrey Shaw, reconFIGURING the CAVE, 2000 (in the exhibition Media Museum)
• 10. Rat tales, 2004. http://www.thegreeneyl.com/rattales
• 11. George Legrady, "SensingSpeakingSpace", 2000-2002. http://www.virtualart.at/database/general/work/sensingspeakingspace.html
• 12. Christa Sommerer / Laurent Mignonneau, "The Living Web", 2002. http://www.virtualart.at/database/general/work/the-living-web.html
• 13. Lawrence Malstaf (BE), Nemo Observatorium, 2009. Courtesy Galerie Fortlaan 17, Gent (BE). http://www.fortlaan17.com/eng/artists/malstaf
• 14. Messa di Voce
• 15. Florian Grond, Hear and Now, 2007. http://www.grond.at/index.htm?html/projects/hear_and_now/hear_and_now.htm&html/submenues/submenu_projects.htm
• 16. Chris Salter. http://www.chrissalter.com/projects.php

Page 80: Lections on Open Frameworks

Media art festivals, exhibitions, institutions

(http://www.videodoc.ncca-kaliningrad.ru/vebliografija/)

Ars Electronica

Art Futura

Artefact Festival

Belluard Bollwerk International Festival

Biennale of Electronic Arts Perth

Boston Cyberarts Festival

Capsula (Science, Art, Nature)

Electrohype (Computer Arts)

Elektra «Digital Art Festival»

Innovation Lab

Institute for the Unstable Media

International Festival for Contemporary Media Art

International Media Art Biennale

International Symposium of Interactive Media Design

ISEA (Inter-Society for The Electronic Arts)

Japan Media Arts Plaza

Los Angeles Center for Digital Art

Machine Project

MLAC Museo Laboratorio d'Arte Contemporanea

Neuro Show (Caltech Art Show)

New Langton Arts

Strange Attractors: charm between art and science

STRP Art & Technology Festival

The Exploratorium

The SIGGRAPH Art Shows

Transmediale

VIDA (International Competition on Art and Artificial Life)

Virtual Platform

ZKM Media Museum

Page 81: Lections on Open Frameworks

Additional sources

• www.mediaartnet.org

• www.mediaartlab.ru (texts in Russian)

• www.cyland.ru

• http://www.amodal.net/precedents.html

• Archive of links, theory (in English)

• Archive of links, examples by category (in English)

Page 82: Lections on Open Frameworks

3. Two-dimensional graphics

Page 83: Lections on Open Frameworks

Display Settings

in main.cpp:

ofSetupOpenGL(&window, 1024, 768, OF_WINDOW);

1024, 768 - window size, OF_WINDOW - output in a window.

To display full screen at 1280x1024:

ofSetupOpenGL(&window, 1280, 1024, OF_FULLSCREEN);

Page 84: Lections on Open Frameworks

Display Settings

Switching full screen mode on and off while the program is running:
ofSetFullscreen(bool fullscreen)

Example: pressing '1' / '2' turns full screen mode on / off:

void testApp::keyPressed(int key) {
    if (key == '1') {
        ofSetFullscreen(true);
    }
    if (key == '2') {
        ofSetFullscreen(false);
    }
}

Page 85: Lections on Open Frameworks

Setting the background

ofBackground(int r, int g, int b)
sets the background color (default is 128, 128, 128).
Note: it must be put in setup() if ofSetBackgroundAuto is enabled.

ofSetBackgroundAuto(bool bAuto) turns on / off clearing of the image in each frame before calling draw() (default: true).

Page 86: Lections on Open Frameworks

Drawing Shapes

Line: ofLine(float x1, float y1, float x2, float y2)

Rectangle: ofRect(float x1, float y1, float w, float h)

Circle: ofCircle(float x, float y, float radius)

Triangle: ofTriangle(float x1, float y1, float x2, float y2, float x3, float y3)

Ellipse: ofEllipse

Polygon: ofBeginShape(), ofVertex(), ofEndShape()

Smooth curve: ofCurve

Page 87: Lections on Open Frameworks

Drawing Shapes

Options:

Drawing color:
ofSetColor(int red, int green, int blue), where the numbers are from 0 to 255.
ofSetColor(int red, int green, int blue, int alpha), where "alpha" is transparency, see below.
ofSetColor(int hexColor), e.g. 0x00ff00 means green.

Line thickness:
ofSetLineWidth(float lineWidth), line thickness in pixels.

Fill / no fill for shapes:
ofFill() - fill
ofNoFill() - do not fill

Page 88: Lections on Open Frameworks

Text output

- Simple text output, without setting the font and size:

ofDrawBitmapString("Some text", 50, 50); // parameters: text and coordinates

- To draw text with a chosen font and size, use ofTrueTypeFont:

1) copy a font, e.g. verdana.ttf, into bin/data (there is one in the openFrameworks folder)
2) declare: ofTrueTypeFont myFont;
3) in setup(): myFont.loadFont("verdana.ttf", 32 /* size */);
4) in draw(): myFont.drawString("Good", 50, 50);

- Output to the text console window:
cout << "Text" << endl;

Page 89: Lections on Open Frameworks

Example

This is what is called “generative art” and “creative coding”

Page 90: Lections on Open Frameworks

Example

Declaring variables:
float px; // top line
float py;
float qx; // indent
float qy;
float col; // color

setup():
ofBackground(255, 255, 255);
ofSetBackgroundAuto(false);
px = 320;
py = 240;
qx = 0;
qy = 0;
col = 0;

Page 91: Lections on Open Frameworks

Example

update():
px += ofRandom(-1, 1); // ofRandom(a, b) - random value in [a, b]
py += ofRandom(-1, 1);
qx += ofRandom(-0.3, 0.3);
qy += ofRandom(-0.3, 0.3);

if (px < 0) px += 640;
if (px >= 640) px -= 640;
if (py < 0) py += 480;
if (py >= 480) py -= 480;

if (qx < -30) qx += 15;
if (qx > 30) qx -= 15;
if (qy < -30) qy += 15;
if (qy > 30) qy -= 15;

col += 0.02;
if (col >= 256) col = col - 256;

Page 92: Lections on Open Frameworks

Example

draw():
int r = col;
int g = int(col * 2) % 256;
int b = 255 - col;
ofSetColor(r, g, b);
ofLine(px, py, px + qx, py + qy);

Page 93: Lections on Open Frameworks

Drawing Images

Page 94: Lections on Open Frameworks

Collage (from French collage, "gluing") is a technique in visual art consisting of gluing onto a substrate objects and materials that differ from the base in color and texture. A work made entirely in this technique is also called a collage. (Wikipedia)

Here, by collage we mean the placement of various images on the screen.

For a collage you need:
- Loading pictures,
- Rotation,
- Translation,
- Resizing,
- Transparency.

http://www.chinesecontemporary.com/hong_hao_5.htm

Page 95: Lections on Open Frameworks

Loading and drawing images

Declare the image:
ofImage image;

in setup():
image.loadImage("texture.jpg"); // load from disk
// the file should be in bin/data

in draw():
ofSetColor(255, 255, 255); // why this is needed - see below,
// "Transparency of the whole picture"
image.draw(100.0, 50.0); // display; the upper left corner will be at (100, 50)

This image can be downloaded fromhttp://uralvision.blogspot.com/2010/03/4.html

The original image was taken fromhttp://ayesha5.files.wordpress.com/2008/06/sun-flower2.jpg

Page 96: Lections on Open Frameworks

Rotate image

in the draw ()

ofPushMatrix(); // remember the transformation matrix
ofRotate(10.0); // rotation in degrees around the upper left corner
image.draw(0.0, 0.0); // draw
ofPopMatrix(); // restore the matrix

Page 97: Lections on Open Frameworks

Rotation around its center

in draw():
// Draw rotated, so that the center is at (200.0, 100.0)
ofPushMatrix();
ofTranslate(200.0, 100.0); // center of the picture
ofRotate(20.0); // rotation
// Draw with a shift:
image.draw(-image.width / 2, -image.height / 2);
ofPopMatrix();

Page 98: Lections on Open Frameworks

Transparency

Page 99: Lections on Open Frameworks

Transparency for the pixels

To make a good collage of several images, you must remove their black backgrounds. This is done with per-pixel transparency.

Page 100: Lections on Open Frameworks

The use of transparency to the image pixels

If an Alpha channel is added to the Red, Green, Blue channels, you can set the transparency of individual pixels.

Alpha = 0 - the pixel is transparent and invisible,
Alpha = 255 - the pixel is completely opaque.
That is, you can simply cut out the background.

Page 101: Lections on Open Frameworks

The scheme of mixing colors with transparency

Typically, transparency data is stored as an "alpha" parameter; it means "opacity".

If the fragment's alpha value lies in [0, 1] (i.e., alpha = Alpha / 255.0), the old color C0 is blended with the fragment color C1 by the formula:

R = (1 - alpha) * R0 + alpha * R1
G = (1 - alpha) * G0 + alpha * G1
B = (1 - alpha) * B0 + alpha * B1

If alpha = 0, the new color is just the old color C0.
If alpha = 1, the new color is the fragment color C1.
At intermediate values the colors are mixed.

If several objects overlap, the procedure is performed sequentially, from the farthest object to the nearest.

Page 102: Lections on Open Frameworks

The scheme of mixing colors with transparency

A red square is superimposed on black, white, and blue backgrounds, three times, with alpha = 0.3, 0.6, 1.0.

Page 103: Lections on Open Frameworks

Methods of obtaining images with a cut-out background

1. Export from a vector editor.
2. "Smart" edge selection in Photoshop or GIMP.
3. By hand: poor quality (aliasing, jagged edges)!

Page 104: Lections on Open Frameworks

Image formats, keeping transparency

1. Formats that store full transparency:
png-24 - best quality / size / unpacking speed;
also bmp-32, tiff, tga.

2. Formats that store 1-bit transparency (jagged edges):
gif, png-8.

3. Formats that do not store transparency at all:
jpg.

Page 105: Lections on Open Frameworks

Example: rotating sunflowers

// Declare variables
ofImage image; // Image
float angle = 0.0; // Angle of rotation

// Initialize
void testApp::setup () {
    image.loadImage ("texture.png"); // png - with transparency
    angle = 0.0;
}

// Update the state
void testApp::update () {
    angle += 0.1; // Rotation
}

Page 106: Lections on Open Frameworks

Example: rotating sunflowers

// Draw
void testApp::draw () {
    // Enable transparency
    ofEnableAlphaBlending ();

    // 2nd option, with the exact blending function:
    // glEnable (GL_BLEND);
    // glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    for (int i = 0; i < 5; i++) {
        ofPushMatrix ();
        ofTranslate (300.0 + i * 100.0, 300.0); // Move
        ofRotate (angle); // Rotation
        ofScale (1.0 + 0.2 * i, 1.0 + 0.2 * i); // Increase the size
        image.draw (-image.width / 2, -image.height / 2);
        ofPopMatrix ();
    }
    ofDisableAlphaBlending (); // Disable transparency
    // glDisable (GL_BLEND);
}

Page 107: Lections on Open Frameworks

Transparency of the whole image

For a collage, transparency of a whole image (layer) is also often used.

Pictured is a collage where some of the sunflowers are overlaid with transparency. Two of the sunflowers are also drawn using only the Red, Blue and the Green, Blue channel pairs, respectively, the remaining channel being made fully transparent (i.e., invisible).

Page 108: Lections on Open Frameworks

Transparency of the whole image

This is accomplished by setting the color with
ofSetColor (r, g, b, a)
before drawing the image.
Images are drawn with per-pixel multiplication by the components of the current color, which can be written conventionally as:

R = r / 255.0 * R0
G = g / 255.0 * G0
B = b / 255.0 * B0
A = a / 255.0 * A0

That is why, to display an image without any changes, you call
ofSetColor (255, 255, 255) before drawing it.

Caution: if the current color is black, the image will be invisible. This is a fairly frequent "problem"!

Page 109: Lections on Open Frameworks

Transparency of the whole image

// Draw
void testApp::draw () {
    float w = image.width;
    float h = image.height;
    ofBackground (0, 0, 0); // Set the background color
    ofEnableAlphaBlending (); // Enable transparency

    // The current color affects texture output: each texel's
    // R, G, B components are multiplied by the corresponding
    // components of the current color and by its alpha
    ofSetColor (255, 255, 255); // Opaque
    image.draw (w, h);

    ofSetColor (255, 255, 255, 128); // Translucent
    image.draw (w / 2, h / 2);

    ofSetColor (0, 255, 0, 128); // Translucent, only the green channel
    image.draw (w / 2, h + h / 2);
    ofDisableAlphaBlending (); // Disable transparency
}

Page 110: Lections on Open Frameworks

Result

Page 111: Lections on Open Frameworks

Draw in the buffer

Page 112: Lections on Open Frameworks

How to draw the path of motion of the pendulum

Objective: add drawing to the swinging-pendulum project, painting wherever the center of the pendulum has been. It is as if a pencil were attached to the center of the pendulum, drawing on paper.

How to do it? If we memorize the trajectory as a polyline and redraw it on every frame, the program will gradually run slower and slower. The better solution is to draw the pendulum's trajectory into an off-screen buffer and then draw that buffer to the screen.

This buffer is like a display that we do not see. Unlike the screen, the buffer is not cleared on every redraw.

FBO - Frame Buffer Object

You can draw into this buffer just as you draw on the screen, and then use the result as a texture: display it on the screen or draw it into another buffer. You can build sophisticated multi-layer images and create "trail" effects for moving objects by drawing one buffer into another with varying opacity.

Page 113: Lections on Open Frameworks

How to draw the path of motion of the pendulum

The drawing algorithm in draw () is then as follows:
1. a straight line connecting the current pendulum position with the previous one is drawn into the buffer;
2. the buffer is drawn on the screen;
3. the pendulum itself is drawn on the screen.

Page 114: Lections on Open Frameworks

Working with buffer drawing

To work with drawing buffers in openFrameworks, it is best to use the ofxFBOTexture addon.

It consists of two files - ofxFBOTexture.h and ofxFBOTexture.cpp.http://addons.openframeworks.cc/projects/list_files/ofxfbotexture

These files must be added to the project like this:
1. copy them into the src folder of the project;
2. in Visual Studio, right-click the project, choose Add - Existing Items, and add them both.

The proper way is to copy every addon into the openframeworks/addons folder, but this can be inconvenient if the project should be portable, that is, built on different computers.

Page 115: Lections on Open Frameworks

Working with buffer drawing

...
#include "ofxFBOTexture.h"

...
ofxFBOTexture buffer; // buffer for off-screen drawing

in the setup ()

// Create the buffer
buffer.allocate (ofGetWidth (), ofGetHeight (),
    false // no autoclear on every drawing - the picture will accumulate there
);

buffer.clear (0, 0, 0, 255); // clear to black

// Note: if you clear with a non-black color, the buffer may get tinted by the first color you draw with. How would you fix this problem?

Page 116: Lections on Open Frameworks

Working with buffer drawing

in the draw ()

buffer.begin (); // start drawing into the buffer

// drawing procedures aimed at the buffer - done just as for the screen
... ofSetColor, ofLine, ...

buffer.end (); // end drawing into the buffer

buffer.draw (0, 0); // output the buffer to the screen

// draw everything else
...

Here the drawing into the buffer accumulates from frame to frame (the trail: where the pendulum has been), while the remaining drawing appears only in the current frame (the pendulum itself with its rubber band).

Page 117: Lections on Open Frameworks

Homework (*)

Draw a polygon filled (textured) with some image.

Hint. Scheme of the function call:

ofTexture tex = image.getTextureReference ();
tex.bind ();
glBegin (GL_QUADS);
glTexCoord2f (...);
glVertex2f (...);
...
glEnd ();
tex.unbind ();

Page 118: Lections on Open Frameworks

Appendix: Video recording and publication of a running program

Page 119: Lections on Open Frameworks

Capture video from screen

CamStudio is a free program for capturing the screen and recording it to video.
http://camstudio.org

For a large capture area the capture rate may be very low. Do not forget to build the project you are recording in the Release configuration, not Debug.

It is better to use the CamStudio Lossless codec: it is fast and does not degrade the image, but the files are large. Therefore, before publishing, convert the file to another codec, for example XVID, using VirtualDub.

Page 120: Lections on Open Frameworks

Publish your video

Where to publish: Youtube, Vimeo.

Youtube is the most common; it is integrated into many blogs and visible on iOS devices.

Vimeo's video quality is superior to Youtube's, so professional work is often published both there and on Youtube.

Page 122: Lections on Open Frameworks

What is a shader

1. Shaders are small programs that the graphics card uses to change the geometry of objects and the pixels of images during rendering.

2. There are vertex and fragment (pixel) shaders.

3. Shaders execute on the graphics card and therefore do not load the CPU, so they can be used for very interesting and complex transformations and effects.

Page 123: Lections on Open Frameworks

What is a shader

Shaders are small programs written in GLSL.

They can be stored in text files. When starting your application, they will be compiled and stored in video memory.

This is convenient because you can change and tune the shaders without recompiling the application itself.

Page 124: Lections on Open Frameworks

What you need to use shaders in openFrameworks

In this lecture, we discuss how to use fragment shaders to transform images into openFrameworks.

Required components:

1. The ofxFBOTexture addon for drawing into a buffer:
ofxFBOTexture.h, ofxFBOTexture.cpp (see "Draw in the buffer" in the lecture on two-dimensional graphics).

2. The ofxShader addon for loading/unloading shaders:
ofxShader.h, ofxShader.cpp

You can download them, for example, from
http://playbat-common.googlecode.com/svn/trunk/of/

Page 125: Lections on Open Frameworks

Example 1. Smoothing

http://www.youtube.com/watch?v=Nkr4JiU0sF0

Smoothing here is implemented by combining shifted copies of the image with different weights. The shift radius is determined by the X coordinate of the mouse.

(The idea was taken from the example shaderBlurExamplehttp://forum.openframeworks.cc/index.php?topic=2729.0)

Page 126: Lections on Open Frameworks

Text of the shader

Create the fragment shader file blur.frag in the folder bin/data:

#extension GL_ARB_texture_rectangle : enable // Setup
uniform sampler2DRect src_tex_unit0; // External parameter - input texture
uniform float blurAmount; // External parameter - the smoothing radius

void main (void) // This function is applied to each pixel
{
    vec2 st = gl_TexCoord[0].st; // st - input pixel coordinates
    vec4 color; // accumulator
    for (float i = -4.0; i <= 4.0; i++) {
        float weight = 5.0 - abs (i);
        color += weight * texture2DRect (src_tex_unit0, st + vec2 (blurAmount * i, 0.0));
        // Get the pixel color from texture src_tex_unit0
        // at coordinates x = st[0] + blurAmount * i, y = st[1]
    }
    color /= 25.0;
    gl_FragColor = color; // Set the output pixel color
}

! Be careful: the compiler does not convert float <-> int automatically; it issues a warning and the shader does not run.

Page 127: Lections on Open Frameworks

Text of the shader

Create the vertex shader file blur.vert in the folder bin/data:

void main () {
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform ();
}

This vertex shader does not change anything; ofxShader simply needs this file to work.

Page 128: Lections on Open Frameworks

The text of the application

#include "ofxFBOTexture.h" // Drawing into the buffer
#include "ofxShader.h" // Shaders

ofxFBOTexture buffer; // Buffer
ofxShader shader; // Shader
ofVideoGrabber grabber; // Capture of images from the camera

void testApp::setup () {
    ofBackground (255, 255, 255);
    grabber.initGrabber (640, 480); // start the camera

    buffer.allocate (640, 480, true); // true - autoclear the background at every step

    // Load the shaders from the files blur.frag and blur.vert.
    // At startup, if there are errors in the shader text, they will be
    // shown on the display along with the number of the offending row.
    shader.loadShader ("blur");
}

void testApp::update () {
    grabber.update (); // update the picture from the camera
}

Page 129: Lections on Open Frameworks

The text of the application

void testApp::draw () {
    // First prepare the picture: draw it into the buffer
    buffer.begin ();
    ofSetColor (255, 255, 255);
    grabber.draw (0, 0); // output the camera frame into the buffer
    buffer.end ();

    // Turn the shader on
    shader.setShaderActive (true);
    // Set the shader parameter
    float blurAmount = mouseX / 20.0; // mouseX - current mouse coordinate
    shader.setUniformVariable1f ("blurAmount", blurAmount); // 1f - a scalar of type float

    // Draw whatever you want on the screen, "passing" it through the shader
    ofSetColor (255, 255, 255);
    buffer.draw (0, 0);

    // Disable the shader
    shader.setShaderActive (false);
}

Page 130: Lections on Open Frameworks

Example 2. Magnifier

http://www.youtube.com/watch?v=H-mYdfaku90

A "magnifying glass" effect appears at the mouse pointer.

(The idea was taken from the example shaderZoomExamplehttp://forum.openframeworks.cc/index.php?topic=2729.0)

Page 131: Lections on Open Frameworks

Text of the shader

Create the fragment shader file zoom.frag in the folder bin/data:

#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect src_tex_unit0;
uniform vec2 circlePos; // Position of the lens
uniform float circleRadius; // Radius of the lens
uniform float zoom; // Magnification factor inside the lens

void main (void)
{
    vec2 st = gl_TexCoord[0].st;
    float relX = st.s - circlePos.x;
    float relY = st.t - circlePos.y;
    float dist = sqrt (relX * relX + relY * relY);
    if (dist <= circleRadius && dist > 0.0) {
        // The pixel is inside the lens and not at its center (we divide by dist)
        float newRad = dist * (zoom * dist / circleRadius);
        float newX = circlePos.x + relX / dist * newRad;
        float newY = circlePos.y + relY / dist * newRad;
        gl_FragColor = texture2DRect (src_tex_unit0, vec2 (newX, newY));
    }
    else {
        gl_FragColor = texture2DRect (src_tex_unit0, st);
    }
}

In addition, create the vertex shader file zoom.vert by copying blur.vert.

Page 132: Lections on Open Frameworks

The text of the application

The basis is the text of the previous example.

in the setup ()
shader.loadShader ("zoom");

in the draw ()

// Set the parameters of the shader
shader.setUniformVariable2f ("circlePos", mouseX, 480 - mouseY);
shader.setUniformVariable1f ("circleRadius", 120);
shader.setUniformVariable1f ("zoom", 1.5);

Page 133: Lections on Open Frameworks

Color operations

We have learned how to mix pixel colors and perform geometric transformations of the image.

How to change color values:
a color has type vec4 - a vector of 4 components R, G, B, Alpha, each taking values from 0 to 1.

vec4 color = vec4 (1.0, 0.0, 0.0, 1.0); // red color

color[0], color[1], color[2], color[3] are the R, G, B, Alpha components.

Consider another example.

Page 134: Lections on Open Frameworks

Example 3. Lens with color change

A modified lens from the previous example, in which the R, G, B components of the colors inside the lens are swapped.

Page 135: Lections on Open Frameworks

Text of the shader

In the file zoom.frag in the folder bin/data, replace the line

gl_FragColor = texture2DRect (src_tex_unit0, vec2 (newX, newY));

with

vec4 color = texture2DRect (src_tex_unit0, vec2 (newX, newY));
gl_FragColor = vec4 (color[1], color[2], color[0], color[3]); // Swap the color components

Page 136: Lections on Open Frameworks

Homework

1. Vortex
Make the lens twist the image inside itself.

Hint: rotate the vector depending on the value of dist / circleRadius.

2. Water 1
The shader language has the sin, cos, and atan functions.
Make a wavy distortion of the image spreading from the center of the screen, as if something was dropped into water in the middle of the screen.
That is, the result should be a video, not a static image.

3. Water 2
Use mouse clicks to simulate dropping something into the water in the picture.
Implement this, for example, by storing the oscillation amplitude of the waves in a texture.

Page 137: Lections on Open Frameworks

5. Interactive sound

Page 138: Lections on Open Frameworks

What is Digital Sound

Page 139: Lections on Open Frameworks

What is sound in general

Sound, in the broad sense, is elastic waves propagating longitudinally in a medium and creating mechanical oscillations in it; in the narrow sense, it is the subjective perception of these oscillations by the special sense organs of animals or humans.
Like any wave, sound is characterized by amplitude and frequency.
(Wikipedia)

http://blog.modernmechanix.com/mags/qf/c/PopularScience/9-1950/med_sound.jpg

Page 140: Lections on Open Frameworks

Representation of sound in digital form

Real sound is captured by a microphone and then subjected to analog-to-digital conversion.

The result is characterized by:
- temporal resolution - the sampling rate [procedure: sampling],
- amplitude resolution - the bit depth [procedure: quantization].

[Figure: a digitized signal, amplitude vs. time - http://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Digital.signal.svg/567px-Digital.signal.svg.png]

Page 141: Lections on Open Frameworks

Sampling frequency

8,000 Hz - telephone; enough for speech.

11,025 Hz - games, samples for electronic music.

22,050 Hz - same uses as 11,025 Hz.

44,100 Hz - many synthesizers and sample libraries; Audio CD.

48,000 Hz - recording studios, live instruments and vocals; DVD.

96,000 Hz - DVD-Audio (MLP 5.1).

192,000 Hz - DVD-Audio (MLP 2.0).

Page 142: Lections on Open Frameworks

Bit depth

Bit depth is the number of bits used to represent signal samples during quantization (in our case, quantization of the amplitude).

8 bits - samples for electronic music.

12 bits - studio sound effects.

16 bits - computer games, samplers, Audio CD.

18 bits - studio sound effects.

24 bits - live sound, vocals, DVD-Audio.

32 bits - floating-point representation, so no accuracy is lost for sounds; used for internal sound processing.

64 bits - also floating point; audio processing.

Page 143: Lections on Open Frameworks

Representation of sound in memory

Example: 1 second of 16-bit audio at a frequency of 44,100 Hz can be represented as a vector

X = (x_1, x_2, ..., x_44100),
where 0 <= x_i <= 2^16 - 1 = 65535.

Representing sound this way, as a vector, is called PCM (Pulse Code Modulation). It is the most common representation. It is analogous to the pixel representation of images.

Page 144: Lections on Open Frameworks

The fundamental difference between sound and image

Images are very convenient to operate on at the pixel level. In particular:

1. Two images can be considered the same if their pixel values are close.

2. You can change an image based on the values of neighboring pixels (for example, the smoothing operation).

For audio in PCM format, neither possibility applies directly.

Page 145: Lections on Open Frameworks

The fundamental difference between sound and image

1. A (la) of the 1st octave, 440.00 Hz

2. the same, shifted in phase

3. E (mi) of the 2nd octave, 659.26 Hz

1. + 3.

2. + 3.

(Audacity was used for sound generation)

The last two sounds sound the same, yet their amplitude functions differ greatly. Thus the human ear perceives the spectrum of a sound, i.e., its frequency composition, not its amplitude representation.

Page 146: Lections on Open Frameworks

What is easy / hard to do "directly" with sound in PCM

Easy (changing and rearranging individual samples, without regard to their neighbors):
- rearrange pieces,
- change the volume of slices,
- reverse: flip the sound from end to beginning,
- mix several sounds,
- mix and swap the stereo channels,
- simple compression,
- simple echo.
Samplers, portastudios, and studio programs do this masterfully.

Hard (requires accounting for neighboring samples):
- compare two sounds for similarity,
- suppress the low and high frequencies,
- add reverb.

This is usually done not directly in PCM but via the spectral representation of the sound (the windowed Fourier transform).

Page 147: Lections on Open Frameworks

Storage formats of sound

WAV
wav = header + bytes of PCM.
Stores sound without quality loss (image analog: bmp).

MP3
Lossy; well suited for storing music (image analog: jpg).

AMR
Lossy; suited for storing speech. Used in mobile telephony (2011). (Image analog: png.)

Page 148: Lections on Open Frameworks

Ways to generate a digital audio

Page 149: Lections on Open Frameworks

Ways to generate a digital audio

That is, how to build the PCM representation of a sound or piece of music:

1. Sampling. Used for producing music. Devices: samplers.

2. (Subtractive) synthesis. Used mainly for modern electronic music. Devices: keyboard synthesizers.

3. FM synthesis.

4. Additive synthesis.

5. Granular synthesis.

6. S&S (Sample & Synthesis): sampling, analysis, and subsequent synthesis. Today it is one of the best technologies for reproducing "live" instruments.

Page 150: Lections on Open Frameworks

Sampling

Recording: "live sound" - microphone - ADC - PCM format.

Playback: PCM format - DAC - speaker.

Additional options: you can change the playback speed, which raises both the pitch and the speed of the sample. Modern algorithms also let you change the pitch of a sample without changing its speed, and vice versa.

http://josephdbferguson.com/uploads/akai_mpc1000.jpg

Sampler Akai MPC1000

Page 151: Lections on Open Frameworks

(Subtractive) Synthesis

In pre-computer times: a few simple waves (rectangular, sinusoidal, triangular) were processed by a set of filters (bass, treble, cutting the desired frequencies), and the result went to the speakers.

Now: it is done digitally. There are difficulties: one must carefully handle the problems of the digital representation of sound ("aliasing").

http://www.jarrography.free.fr/synths/images/moog_minimoog.jpg

SynthesizerMinimoog

Page 152: Lections on Open Frameworks

Sample playbackin openFrameworks

Page 153: Lections on Open Frameworks

The project "soundscape"

The user pokes the mouse in different parts of the screen, and a sound begins to play.
http://www.freesound.org/samplesViewSingle.php?id=221

// Declare variables
ofSoundPlayer sample; // Sample player
ofPoint p; // point and radius - to draw a circle
float rad;

void testApp::setup () {
    sample.loadSound ("sound.wav"); // Load the sample from the folder bin/data
    sample.setVolume (0.5f); // Volume [0, 1]
    sample.setMultiPlay (true); // Allow several copies of the sample to play at once
    ofSetFrameRate (60); // frame drawing rate
    ofSetBackgroundAuto (false); // turn off background erasing
    ofBackground (255, 255, 255);
}

Page 154: Lections on Open Frameworks

The project "soundscape"

void testApp::update () {
    ofSoundUpdate (); // Update the state of the sound system
}

void testApp::draw () {
    // If the sound is playing, draw a transparent circle
    ofEnableAlphaBlending ();
    if (sample.getIsPlaying ()) {
        // Random color
        ofSetColor (ofRandom (0, 255), ofRandom (0, 255), ofRandom (0, 255), 20);
        ofCircle (p.x, p.y, rad);
    }
    ofDisableAlphaBlending ();
}

Page 155: Lections on Open Frameworks

The project "soundscape"

// Mouse clicked
void testApp::mousePressed (int x, int y, int button) {
    float h = ofGetHeight (); // screen height

    // Compute the desired playback speed of the sample;
    // here 1.0 is the original sample speed
    float speed = (h - y) / h * 3.0;
    if (speed > 0) {
        sample.play (); // Start a new copy of the sample
        sample.setSpeed (speed); // Set the playback speed

        // Remember the point and the radius of the circle to draw
        p = ofPoint (x, y);
        rad = (3 - speed);
        rad = 20 * rad * rad;
    }
}

Page 156: Lections on Open Frameworks

The project "soundscape"

Page 157: Lections on Open Frameworks

Additive synthesis

Additive synthesis builds sound by summing a set of harmonics (i.e., sine waves of different frequencies) with varying volumes.

Any sound can be represented with arbitrary accuracy as the sum of a large number of harmonics with varying volumes. In practice, however, working with a large number of harmonics requires large computational resources. Still, several hardware and software additive synthesizers exist today.

Page 158: Lections on Open Frameworks

Project scenario "Additive Synthesizer"

A user stands against a white background with his hands in front of the camera. There are n harmonics. The screen is divided into n vertical strips; in each strip, the number of pixels whose brightness is below a certain threshold is counted. This number determines the volume of the corresponding harmonic.

Use n = 20 sinusoidal harmonics with frequencies
100 Hz, 200 Hz, ..., 2000 Hz.

The harmonics are played as looped samples whose volume is simply changed.

Page 159: Lections on Open Frameworks

Synth Code 1 / 4

// Declare variables

// Video grabber for "capturing" the video frames
ofVideoGrabber grabber;
int w; // Width of the frame
int h; // Height of the frame

const int n = 20; // Number of harmonics
ofSoundPlayer sample[n]; // Samples of the harmonics
float volume[n]; // Volumes of the harmonics
int N[n]; // Number of pixels contributing to each harmonic

ofSoundPlayer sampleLoop; // Sample with a drum loop

Page 160: Lections on Open Frameworks

Synth Code 2 / 4

// Initialize
void testApp::setup () {
    w = 320;
    h = 240;
    grabber.initGrabber (w, h); // Connect the camera

    // Load the harmonics' samples
    for (int i = 0; i < n; i++) {
        int freq = (i + 1) * 100;
        sample[i].loadSound (ofToString (freq) + ".wav"); // Files are called 100.wav, ...
        sample[i].setVolume (0.0); // Volume
        sample[i].setLoop (true); // Loop the sound
        sample[i].play (); // Start the sound
    }
}

Page 161: Lections on Open Frameworks

Synth Code 3 / 4

// Update the state
void testApp::update () {
    grabber.grabFrame (); // grab a frame
    if (grabber.isFrameNew ()) { // if a new frame has arrived
        for (int i = 0; i < n; i++) { volume[i] = 0; N[i] = 0; } // Reset the harmonics
        unsigned char *input = grabber.getPixels (); // pixels of the input image
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // Input pixel (x, y):
                int r = input[3 * (x + w * y) + 0];
                int g = input[3 * (x + w * y) + 1];
                int b = input[3 * (x + w * y) + 2];
                int result = (r + g + b > 400) ? 0 : 1; // Threshold
                int i = (x * n / w); // Which harmonic to write the result to
                volume[i] += result;
                N[i]++;
            }
        }
        // Set the new volumes of the harmonics
        for (int i = 0; i < n; i++) {
            if (N[i] > 0) { volume[i] /= N[i]; } // Normalize the volume to [0, 1]
            sample[i].setVolume (volume[i] / n); // Volume.
            // Divide by n, otherwise the output sound will distort
        }
    }
    ofSoundUpdate (); // Update the state of the sound system
}

Page 162: Lections on Open Frameworks

Synth Code 4 / 4

// Draw
void testApp::draw () {
    ofBackground (255, 255, 255); // Set the background color
    float w = ofGetWidth (); // Screen width and height
    float h = ofGetHeight ();

    ofSetColor (255, 255, 255); // Otherwise the camera frame is drawn incorrectly
    grabber.draw (0, 0, w, h); // Output the frame

    // Draw the harmonics
    ofEnableAlphaBlending (); // Enable transparency
    ofSetColor (0, 0, 255, 80); // Blue with opacity 80
    for (int i = 0; i < n; i++) {
        float harmH = volume[i] * h; // Height of the bar of harmonic i
        ofRect (i * w / n, h - harmH, w / n, harmH);
    }
    ofDisableAlphaBlending (); // Disable transparency
}

Page 163: Lections on Open Frameworks

Performance on the “additive synthesizer”

http://www.youtube.com/watch?v=y70Oxk1RAOM

Page 164: Lections on Open Frameworks

Sound synthesisin openFrameworks

Page 165: Lections on Open Frameworks

Introduction

Sound synthesis in openFrameworks is performed at the lowest, "byte" level.

It is therefore well suited to experimental projects with sound.

In complex projects it is more convenient to use a special library such as SndObj (see the ofxSndObj addon), or some other program such as PureData or Max/MSP, connected to openFrameworks via the OSC protocol.

Page 166: Lections on Open Frameworks

Program Structure

For sound synthesis, the usual program structure is extended with audioRequested (). The sound driver calls it whenever the next chunk of the sound buffer must be filled.

Page 167: Lections on Open Frameworks

Program Structure

In testApp.h, add to class testApp:
void audioRequested (float * input, int bufferSize, int nChannels);

In setup (), add:
ofSoundStreamSetup (2, 0, this, 22050, 256, 4);

// 2 output channels,
// 0 input channels,
// 22050 - sampling rate, samples per second
// 256 - buffer size
// 4 - number of buffers to use. Affects the delay.
// Buffer size and number of buffers set the balance between delay
// and audible "glitches", which occur when the computer is not fast enough.

Page 168: Lections on Open Frameworks

Program Structure

In testApp.cpp, add:

void testApp::audioRequested (
    float * output, // output buffer
    int bufferSize, // buffer size
    int nChannels // number of channels
) {
    // Example: "white noise" in two channels
    for (int i = 0; i < bufferSize; i++) {
        output[i * nChannels] = ofRandomf (); // [-1, 1]
        output[i * nChannels + 1] = ofRandomf ();
    }
}

Page 169: Lections on Open Frameworks

Example

See the audioOutputExample example in openFrameworks.

Mouse movement:
1. up and down - changes the pitch of the sound;
2. left and right - changes the panning.
A mouse click generates noise.

Page 170: Lections on Open Frameworks

Example of synthesis: RubberGravity

Stretching the rubber squares generates sound.

http://www.youtube.com/watch?v=Pz6PO4H1LT0

Page 171: Lections on Open Frameworks

Homework

Using the audioOutputExample example, add sound generation to the swinging-pendulum example. Namely: let the pendulum's position along Y set the pitch, and its position along X the panning.

Page 172: Lections on Open Frameworks

6. Introduction to Computer Vision.

Grabbing and processing camera images

Page 173: Lections on Open Frameworks

What is computer vision

Page 174: Lections on Open Frameworks

Definition(From Wikipedia)

Computer vision - Theory and technology of creating machines that can see.

http://the-gadgeteer.com/wp-content/uploads/2009/12/mr-robot-head-game.jpg

Page 175: Lections on Open Frameworks

Definition

The topics of computer vision include:

- action recognition,

- detection of events,

- tracking,

- pattern recognition,

- restoration of images.

Page 176: Lections on Open Frameworks

Image examples

Ordinary light, radio waves, ultrasound - all of them are sources of images:

1. Color images of the visible spectrum2. Infrared images3. Ultrasound images4. Radar images5. Depth images

Page 177: Lections on Open Frameworks

Image examples

1. Color images of the visible spectrum

http://rkc.oblcit.ru/system/files/images/%D0%9F%D1%80%D0%B8%D1%80%D0%BE%D0%B4%D0%B013.preview.jpg
http://imaging.geocomm.com/gallery/san_francisco_IOD032102.jpg

Page 178: Lections on Open Frameworks

Image examples

2. Infrared images

http://lh6.ggpht.com/_Wy2U3qKMO8k/SSyB6BTdg8I/AAAAAAAACd8/Iai_3QZIjrI/Australia+5+dollars+B+se.jpg
http://i367.photobucket.com/albums/oo117/syquest/acrylic_no_filter.jpg

Page 179: Lections on Open Frameworks

Image examples

3. Ultrasound images. An image from a side-scan sonar:

http://ess.ru/publications/2_2003/sedov/ris6.jpg

Page 180: Lections on Open Frameworks

Image examples

4. Radar images

Snapshot of the radar:

http://cdn.wn.com/pd/b1/3a/abd9ebc81d9a3be0ba7c4a3dfc28_grande.jpg

Page 181: Lections on Open Frameworks

Image examples

5. Images with depth

http://opencv.willowgarage.com/documentation/c/_images/disparity.png

Video: http://www.youtube.com/watch?v=pk_cQVjqFZ4

Page 182: Lections on Open Frameworks

First sign of computer vision tasks

The input data is a two-dimensional array of data, i.e., an "image".

But two-dimensional data arrays are used not only in computer vision:

Page 183: Lections on Open Frameworks

Second sign of computer vision tasks

The goal of processing is the extraction and use of color and geometric structures in the image.

http://www.tyvek.ru/construction/images/structure.jpg

Page 184: Lections on Open Frameworks

Disciplines working with 2D images:

1. Signal and image processing. Low-level data processing, usually without a detailed study of image content. Objectives: restoration, noise removal, data compression, quality improvement (sharpness, contrast, ...).

2. Computer vision. Middle-level data analysis: separating objects in the image and measuring their parameters.

3. Pattern recognition. High-level data analysis: determining the type of an object. The input data usually must be presented as a set of features, which are often computed using 1. and 2.

Page 185: Lections on Open Frameworks

Cameras for computer vision

- Key features
- Examples of good cameras

Page 186: Lections on Open Frameworks

Key Features

Different real-time processing tasks require different cameras.

Their main features are:

1. Resolution

2. The number of frames per second

3. Type of data obtained

4. Way to transfer data into the computer

Page 187: Lections on Open Frameworks

Resolution

This is the image size in pixels, obtained from the camera.

http://www.mtlru.com/images/klubnik1.jpg

320 x 240
accuracy when observing an object of size 1 m: 3.13 mm
size of 30 frames: 6.6 MB

640 x 480
accuracy when observing an object of size 1 m: 1.56 mm
size of 30 frames: 26.4 MB

1280 x 1024
accuracy when observing an object of size 1 m: 0.97 mm
size of 30 frames: 112.5 MB

Page 188: Lections on Open Frameworks

The number of frames per second

This is the number of images obtained from the camera per second.

30 fps

Time between frames: 33 ms

60 fps

Time between frames: 16 ms

150 fps

Time between frames: 6 ms

Can be used for a musical instrument:

http://www.youtube.com/watch?v=7iEvQIvbn8o

Page 189: Lections on Open Frameworks

Type of data obtained

What data we get from the camera for processing.

Color or grayscale image of the visible spectrum

Infrared image

Using invisible infrared illumination, such a camera can see in a dark room

(useful at performances)

Color image + depth (information about the distance to objects)

Page 190: Lections on Open Frameworks

Way to transfer data into the computer

- Analog
- Webcams (USB cameras)
- FireWire cameras (IEEE-1394 cameras)
- Network (IP) cameras
- Smart cameras

Page 191: Lections on Open Frameworks

Analog

Historically the first to appear; the signal is transmitted as an analog TV-format signal.

(+) Transmits data over long distances, albeit with interference (100 m)
(+) Easy to install, small size

(-) Feeding the signal into the computer requires a special capture card or TV tuner, which usually consumes a lot of computing resources.
(-) "Interlace" makes it very difficult to analyze the image if there is movement (actually 2 half-frames arrive, each 50 times/sec).

Page 192: Lections on Open Frameworks

Webcams (USB-camera)

Appeared around 2000; they transmit data via the USB protocol, uncompressed or compressed as JPEG.

(+) Easy to connect to a computer and software
(+) Cheap, widely available

(-) Overhead: decoding JPEG requires computing resources.
(-) The cheapest models usually have bad optics and sensors (noisy images)
(-) Because of USB bandwidth limitations you cannot connect more than 2 cameras to a single USB hub, but a PC usually has 2-3 USB hubs.

Page 193: Lections on Open Frameworks

Firewire-camera (IEEE-1394)

Cameras that transmit the signal over the FireWire protocol, usually in a dust- and moisture-proof case; these are typically cameras for industrial applications.

(+) Transfer of uncompressed video in excellent quality at high speed
(+) You can connect multiple cameras
(+) Tend to have excellent optics

(-) High price
(-) Require power, which is sometimes difficult to provide on laptops

Page 194: Lections on Open Frameworks

Network (IP-camera)

Cameras that transmit data over a network (wired or wireless) channel. They are now rapidly gaining popularity in all areas.

(+) Easy connection to a PC
(+) Ease of installation
(+) Data can be transferred over an unlimited distance, which allows constructing a network of cameras covering a building or area, attached to an airship, etc.
(+) Control: you can rotate the camera and adjust the zoom

(-) May have problems with speed of response
(-) Still a relatively high price
(-) Not yet portable (2011)

Page 195: Lections on Open Frameworks

"Smart" cameras

Cameras with a computer located inside the camera case. These cameras are fully functional vision systems that transmit the detected objects etc. over various protocols.

(+) Compact.
(+) Scalability: it is easy to build a network of such cameras.

(-) Often require adaptation of existing projects.
(-) Low-cost models are rather slow, so they only do a good job on relatively simple image-analysis tasks.

Page 196: Lections on Open Frameworks

Separate type: Infrared Camera

Constructed from ordinary cameras by adding an infrared filter and, often, an infrared illuminator.

+ IR rays are almost invisible to humans (in the dark they can be seen as a faint red glow), so they are often used to simplify the analysis of objects in the field of view.

- Specialized infrared cameras suitable for machine vision are not a mass product, so they usually need to be ordered.

Page 197: Lections on Open Frameworks

Examples of good cameras

Sony PS3 Eye

320 x 240: 150 FPS
640 x 480: 60 FPS

Data types: visible light, IR (requires removing the IR filter)

Price: $50.

USB, CCD

Page 198: Lections on Open Frameworks

Examples of good cameras

Point Grey Flea3

648 x 488: 120 FPS

Data types: visible light, IR (?)

Price: $600.

Model FL3-FW-03S1C-C, IEEE 1394b, CCD

Page 199: Lections on Open Frameworks

Examples of good cameras

Microsoft Kinect

640 x 480: 30 FPS

Data type: visible light + depth

Price: $150.

(Depth: stereo vision using a laser infrared illuminator, which is why it does not work in sunlight.) USB, CMOS

Page 200: Lections on Open Frameworks

Examples of good cameras

Point Grey BumbleBee2

640 x 480: 48 FPS

Data type: visible light + depth

Price: $2000.

(Depth: stereo vision with two cameras.) IEEE 1394b, CCD

Page 201: Lections on Open Frameworks

What if you have no webcam?

1. Get the program SplitCam (http://www.splitcamera.com/). It can simulate a webcam, taking an arbitrary video file (usually avi) as input.

2. Load an avi file into SplitCam, and then run the CameraTest project, see below.

Attention: even if SplitCam is off, it is the 0-th (default) camera in the system. Therefore, if you turn on a webcam, your project may still show black frames from the camera. Solution: select camera 1 in the project's grabber settings, or uninstall SplitCam.

Page 202: Lections on Open Frameworks

Getting images from camera in openFrameworks

Page 203: Lections on Open Frameworks

Receiving and displaying a frame in openFrameworks: the CameraTest project

Preparation of the project:

In the folder openFrameworks/apps/examples take the example emptyProject and copy it to apps/myApps/CameraTest

Page 204: Lections on Open Frameworks

CameraTest project: testApp.cpp (1)

#include "testApp.h"

// Declare variables
ofVideoGrabber grabber; // video grabber for "capturing" video frames
int w; // width of the frame
int h; // height of the frame

// Initialize
void testApp::setup() {
    w = 320;
    h = 240;
    grabber.initGrabber(w, h);   // connect the camera
    ofBackground(255, 255, 255); // set the background color
}

Page 205: Lections on Open Frameworks

CameraTest project: testApp.cpp (2)

// Update the state
void testApp::update() {
    grabber.grabFrame(); // grab a frame
}

// Draw
void testApp::draw() {
    grabber.draw(0, 0); // output frame
}

Page 206: Lections on Open Frameworks

Threshold

Page 207: Lections on Open Frameworks

Way of storing images in memory

The image is usually stored in memory by writing its pixels sequentially, row by row. (JPEG, PNG, etc. are packed images and are stored in a fundamentally different way.)

Depending on the type of image, one pixel may consist of a different number of bytes:
1 byte: black and white (monochrome),
3 bytes: color (Red, Green, Blue),
4 bytes: color with transparency (Red, Green, Blue, Alpha).

Modern GUIs use 4-byte images for pictures, icons, etc. Input from the camera is in 3-byte format. Image analysis at the object-extraction stages is often conducted on 1-byte images. Important: the direction of the OY axis and the RGBA byte order may vary depending on the file format.

Page 208: Lections on Open Frameworks

Way of storing images in memory

Let
unsigned char *image;
be an image with k bytes per pixel, of size w x h pixels.

Then the components of pixel (x, y) are accessed as:
image[k * (x + w * y) + 0]
image[k * (x + w * y) + 1]
...
image[k * (x + w * y) + k-1]

For example, for a pixel (x, y) of an RGB image:
image[3 * (x + w * y) + 0] - Red
image[3 * (x + w * y) + 1] - Green
image[3 * (x + w * y) + 2] - Blue

Page 209: Lections on Open Frameworks

Threshold

Threshold processing finds the pixels whose brightness, i.e. (0.2989 * Red + 0.5870 * Green + 0.1140 * Blue), or a single color component (Red, Green or Blue), is greater than some threshold value.

What is needed to perform the processing:
- access to the pixels of a frame for analysis,
- a way to analyze the pixels and display the result on the screen.

Page 210: Lections on Open Frameworks

Threshold

Add to "Declare variables":

// Processed bytes of the image
unsigned char *outImage;
// Texture to display the processed image
ofTexture outImageTexture;

Add to setup():

// Allocate memory for image analysis
outImage = new unsigned char[w * h * 3];

// Create a texture to display the result on the screen
outImageTexture.allocate(w, h, GL_RGB);

Page 211: Lections on Open Frameworks

Threshold

// Update the state
void testApp::update() {
    grabber.grabFrame(); // grab a frame
    if (grabber.isFrameNew()) { // if a new frame arrived
        // Pixels of the input image:
        unsigned char *input = grabber.getPixels();
        // Loop through them
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // Input pixel (x, y):
                int r = input[3 * (x + w * y) + 0];
                int g = input[3 * (x + w * y) + 1];
                int b = input[3 * (x + w * y) + 2];
                // Threshold via Blue
                int result = (b > 100) ? 255 : 0;
                // The output pixel will be black or white:
                outImage[3 * (x + w * y) + 0] = result;
                outImage[3 * (x + w * y) + 1] = result;
                outImage[3 * (x + w * y) + 2] = result;
            }
        }
        // Write to a texture for subsequent output to the screen
        outImageTexture.loadData(outImage, w, h, GL_RGB);
    }
}

Page 212: Lections on Open Frameworks

Threshold

// Draw
void testApp::draw() {
    grabber.draw(0, 0); // output frame
    outImageTexture.draw(w, 0, w, h); // output the processing result
}

Page 213: Lections on Open Frameworks

Search for color labels

We solve the problem of finding the coordinates of a blue object in the input frame.

First we find the blue pixels. These are pixels whose Blue channel is substantially greater than their Red and Green channels. To do this:

Replace the line "int result = ..." with:
int result = (b > r + 100 && b > g + 100) ? 255 : 0;

Page 214: Lections on Open Frameworks

Search for color labels

Thus we have labeled the "blue" pixels. Now find the coordinates of their center. For simplicity, we assume there is one blue object in the frame. Then we can take the center of gravity of the labeled pixels.

Page 215: Lections on Open Frameworks

Search for color labels

Add to "Declare variables":

ofPoint pos; // coordinates of the object

Add to update() the calculation of the center of gravity of the labeled pixels:

pos = ofPoint(0, 0);
int n = 0; // number of pixels found
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        int b = outImage[3 * (x + w * y) + 2]; // look at the processed image
        if (b == 255) { // we previously labeled blue dots this way
            pos.x += x;
            pos.y += y;
            n++;
        }
    }
}
// Compute the average
if (n > 0) {
    pos.x /= n;
    pos.y /= n;
}

Page 216: Lections on Open Frameworks

Search for color labels

// Draw
void testApp::draw() {
    // This must be added, otherwise the texture is drawn incorrectly:
    ofSetColor(255, 255, 255);

    grabber.draw(0, 0); // output frame
    outImageTexture.draw(w, 0, w, h); // output processing result

    // Display a circle around the object
    ofSetColor(0, 255, 0); // green
    ofNoFill(); // turn off the fill
    ofCircle(pos.x, pos.y, 20); // draw a circle on the original frame
    ofCircle(pos.x + w, pos.y, 20); // draw a circle on the processed frame
}

Page 217: Lections on Open Frameworks

Search for color labels

Result:

These coordinates can be used to control something. By the way, "n" can be used to check whether the object of interest is present in the frame.

Page 218: Lections on Open Frameworks

Homework: "Instinct2"

Implement the following interactive project:

1. Take the color-label-finding project and build an image intens, consisting of pixels characterizing the intensity of blue, without threshold processing:

int result = b - (r + g) / 2; // variants: b - min(r, g), b - max(r, g)
result = max(result, 0); // result must be in [0..255]
intens[3 * (x + w * y) + 0] = result;
intens[3 * (x + w * y) + 1] = result;
intens[3 * (x + w * y) + 2] = result;

Output this image on the screen.

Page 219: Lections on Open Frameworks

Homework: "Instinct2"

2. Place on the screen 20-50 colored "creatures", whose initial positions and colors are given randomly. They have mass and velocity. Let them have a variety of colors and sizes, and let the size pulsate. Such sprites can be drawn in Photoshop with translucent brushes of different colors:

Page 220: Lections on Open Frameworks

Homework: "Instinct2"

3. Let a creature move in the direction of maximum intensity of blue. To this end, compute the center of mass (mx, my) of intens in some neighborhood of the creature's position (x0, y0):

float mx = 0;
float my = 0;
float sum = 0;
int rad = 100; // radius of the neighborhood; you may want it to depend on
               // the current size of the creature
for (int y = -rad; y <= rad; y++) {
    for (int x = -rad; x <= rad; x++) {
        if (x + x0 >= 0 && x + x0 < w && y + y0 >= 0 && y + y0 < h // inside the screen
            && x * x + y * y <= rad * rad // inside a circle of radius rad
        ) {
            float value = intens[3 * ((x + x0) + w * (y + y0)) + 0];
            mx += value * x;
            my += value * y;
            sum += value;
        }
    }
}
if (sum > 0) {
    mx /= sum;
    my /= sum;
}

Then (mx, my) is the direction, relative to the creature, in which it must be steered. Apply Newton's second law, setting the desired acceleration so that the creature moves in the right direction.

Page 221: Lections on Open Frameworks

How to make the physics speed independent of computer power

To make the speed of the physics simulation independent of the power of the computer, use a timer:

// Declare variables
float time0 = 0; // time of the last update() call

In update():

float time = ofGetElapsedTimef(); // time from the start of the program, in seconds
float dt = min(time - time0, 0.1f);
time0 = time;

// Use the dt value in your physics!
// We take the "min" because if for some reason update() is delayed
// (for example, the user moves the window on the screen),
// this protects us from dt becoming too large
// (a large dt can "blow up" the simulation).

Page 222: Lections on Open Frameworks

A note about Release/Debug

Do not forget to enable "Release" when compiling the finished project; it will speed up the program.

Page 223: Lections on Open Frameworks

Note

Tasks like thresholding, noise removal, object detection, contour tracking and others are easier to solve with ready-made procedures implemented in the OpenCV library, connected to openFrameworks.

Page 224: Lections on Open Frameworks

7. OpenFrameworks and OpenCV

Page 225: Lections on Open Frameworks

Analysis and modification of images. In openFrameworks you can edit an ofImage as follows:

1. Get the image bytes:
unsigned char *data = image.getPixels();
int w = image.width;
int h = image.height;
// The data is not copied, and you must not delete it!

2. Edit: build new image data in data1:
unsigned char *data1 = new unsigned char[w * h * 3];
// ... fill data1, using data

3. Set the image bytes:
image.setFromPixels(data1, w, h, OF_IMAGE_COLOR);

// OF_IMAGE_COLOR - 3-byte color.
// OF_IMAGE_GRAYSCALE - 1-byte gray.
// OF_IMAGE_COLOR_ALPHA - 4 bytes, color + transparency.

Page 226: Lections on Open Frameworks

Analysis and change of image

Byte-level processing is usually inconvenient. It is better to use special tools. One of the most convenient today is the OpenCV library.

openFrameworks has an addon for it, ofxOpenCv.

To date (April 2011) it works with OpenCV 1.x. That version of OpenCV is inconvenient in that it has only a C interface.

Therefore we study the more convenient OpenCV 2.x and use it without the ofxOpenCv addon.

Page 227: Lections on Open Frameworks

OpenFrameworks and OpenCV

In OpenCV an image is represented by the Mat class.

How to convert an ofImage to a Mat and back, shown for 3-channel images:

From openFrameworks to OpenCV:

ofImage image; // assume it exists and is 3-channel
...
Mat imageCV(cv::Size(image.width, image.height), CV_8UC3, image.getPixels());

From OpenCV to openFrameworks:

Mat imageCV; // assume it exists and is 3-channel
...
image.setFromPixels((unsigned char *) IplImage(imageCV).imageData,
                    imageCV.size().width, imageCV.size().height,
                    OF_IMAGE_COLOR);

Page 228: Lections on Open Frameworks

Introduction to OpenCV

- What is OpenCV
- The first OpenCV project
- The Mat class
- Image processing functions

Page 229: Lections on Open Frameworks

What is OpenCV

"Open Computer Vision Library"

An open library with a set of functions for image processing, analysis and recognition, in C/C++.

Page 230: Lections on Open Frameworks

What is OpenCV

2000 - first alpha version, support from Intel, C interface

2006 - version 1.0

2008 - support from Willow Garage (a robotics lab)

2009 - version 2.0, C++ classes

2010 - version 2.2, GPU support

Page 231: Lections on Open Frameworks

The first OpenCV project. 1. Creating a project

We assume that Microsoft Visual C++ 2008 Express Edition and OpenCV 2.1 are already installed.

1. Run VS2008

2. Create a console project: File - New - Project - Win32 Console Application; in the Name field enter Project1, click OK.

3. Set up the paths. Alt+F7 opens the project properties.
Configuration Properties - C/C++ - General - Additional Include Directories: set the value "C:\Program Files\OpenCV2.1\include\opencv";

Linker - General - Additional Library Directories: set the value C:\Program Files\OpenCV2.1\lib\

Linker - Input - Additional Dependencies:
cv210.lib cvaux210.lib cxcore210.lib cxts210.lib highgui210.lib for Release,
cv210d.lib cvaux210d.lib cxcore210d.lib cxts210d.lib highgui210d.lib for Debug

Page 232: Lections on Open Frameworks

The first OpenCV project. 2. Reading an image and displaying it on screen

1. Prepare the input data: save the file http://www.fitseniors.org/wp-content/uploads/2008/04/green_apple.jpg as C:\green_apple.jpg

2. Write in Project1.cpp:

#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
using namespace cv;

int main(int argc, const char **argv)
{
    Mat image = imread("C:\\green_apple.jpg"); // load image from disk
    imshow("image", image); // show image
    waitKey(0); // wait for a keystroke
    return 0;
}

3. Press F7 to compile, F5 to run. The program will show the image in a window and, on any key press, will exit.

Page 233: Lections on Open Frameworks

The first OpenCV project. 3. Linear operations on images

Replace the text of main from the previous example with:

int main(int argc, const char **argv)
{
    Mat image = imread("C:\\green_apple.jpg");

    // image1 is pixelwise equal to 0.3 * image
    Mat image1 = 0.3 * image;
    imshow("image", image);
    imshow("image1", image1);
    waitKey(0);
    return 0;
}

Page 234: Lections on Open Frameworks

The first OpenCV project. 4. Working with rectangular subregions of an image

Replace the text of main from the previous example with:

int main(int argc, const char **argv)
{
    Mat image = imread("C:\\green_apple.jpg");

    // Cut out part of the picture
    Rect rect = Rect(100, 100, 200, 200); // rectangle to cut
    Mat image3;
    image(rect).copyTo(image3); // copy that part of the image
    imshow("image3", image3);

    // Change the part of the picture inside the picture
    image(rect) *= 2;
    imshow("image changed", image);

    waitKey(0);
    return 0;
}

Page 235: Lections on Open Frameworks

Mat Class

Mat is the base class for storing images in OpenCV.

Page 236: Lections on Open Frameworks

Mat class: single- and multi-channel images

An image is a matrix of pixels. Each pixel can store some data. If a pixel stores vector data, the dimension of the vector is the number of image channels.

A 1-channel image is also called grayscale.
3-channel images typically consist of three components (Red, Green, Blue).

OpenCV can also work with 2- and 4-channel images.

Page 237: Lections on Open Frameworks

Mat class: creating images

1) An empty image without a specified type:

Mat imageEmpty;

2) An image of w x h pixels with values 0..255 (8U means "unsigned 8 bit", C1 means "one channel"):

int w = 150; int h = 100;
Mat imageGray(cv::Size(w, h), CV_8UC1);

Page 238: Lections on Open Frameworks

Mat class: creating images

3) A 1-channel image with floating-point values (32F means "float 32 bit"):

Mat imageFloat(cv::Size(w, h), CV_32FC1);

4) A 3-channel image with values 0..255 for each channel:

Mat imageRGB(cv::Size(w, h), CV_8UC3);

Page 239: Lections on Open Frameworks

Mat class: memory management

1. Memory for an image is allocated and cleared automatically.

That is, OpenCV itself creates an image of the required size and type when the image is an output parameter of a function:

Mat imageFloat;
imageGray.convertTo(imageFloat, CV_32FC1, 1.0 / 255.0);

Here OpenCV itself allocates imageFloat. Importantly, if your image is already of the right size and type, no memory allocation is performed.

2. The assignment operator does not copy the data (as std::vector does), and does not simply copy pointers; it uses a reference-counting mechanism.

Page 240: Lections on Open Frameworks

Mat class: memory management

The reference-counting mechanism (in STL this is shared_ptr; in Java all references work this way) works like this:

{
    Mat A(cv::Size(100, 100), CV_8UC1);
    // Memory for the image is allocated, and it is recorded
    // that this memory is referenced by a single image.
    {
        Mat B = A;
        // Here no memory is allocated for the image; the data in B
        // simply points to the same area of memory.
        // Therefore, if we change B, A changes too.
        // The reference count of the image increased to 2.
    }
    // Here B went out of scope; the reference count decreased to 1.
}
// Here A went out of scope; the reference count became 0,
// and the memory allocated for it is automatically cleared.

Page 241: Lections on Open Frameworks

Mat class: memory management

Since the operation Mat B = A; does not copy image A to B, in order to create a copy of an image for subsequent independent use you must use the explicit commands copyTo and clone:

image1.copyTo(image2);
image3 = image1.clone();

Page 242: Lections on Open Frameworks

Mat class: memory management

Outcome:
1) The assignment Mat B = A; is very fast; it does not copy the data but adjusts pointers to it in a special way. This allows you to pass a Mat into a function directly, without pointers and references, without causing unwanted copying of the Mat on the stack (as would happen with std::vector).

Although, of course, const Mat & is still passed faster.

2) To copy images, use the explicit commands copyTo and clone.

Page 243: Lections on Open Frameworks

Mat class: per-pixel access to images

OpenCV has several ways of per-pixel access to images. They vary in degree of safety (typing and out-of-bounds checks), speed and convenience.

Wherever possible you should avoid direct access to pixels and instead use OpenCV functions, since they usually work faster and the code is more understandable.

Page 244: Lections on Open Frameworks

Mat class: per-pixel access to images

One way to access the pixels of an image whose type is known is the at method. For a single-channel image with values 0..255 it is:

// Get a value
int value = imageGray.at<uchar>(y, x);

// Set a value
imageGray.at<uchar>(y, x) = 100;

Note that x and y in the call are swapped: the row index comes first.

Page 245: Lections on Open Frameworks

Mat class: type conversion

Note: when displaying floating-point images on the screen with OpenCV, bear in mind that they are displayed on the assumption that their values lie in [0, 1]. Therefore, when converting an 8-bit image to a float image, apply a transformation: multiplication by 1.0 / 255.0.

To convert images between bit depths (float and unsigned char), use the class member convertTo. Its second argument is the type of the output image.

imageGray.convertTo(imageFloat, CV_32FC1, 1.0 / 255.0);

The number of channels of input and output must match!

Page 246: Lections on Open Frameworks

Mat class: type conversion

To convert between different color spaces, use the cvtColor function. If necessary, it can change the number of channels.

For example, converting a 3-channel RGB image to grayscale:

cvtColor(inputRGB, outputGray, CV_BGR2GRAY);

and back:

cvtColor(inputGray, outputRGB, CV_GRAY2BGR);

Page 247: Lections on Open Frameworks

Mat class: splitting into channels

The split function divides a multi-channel image into channels. The merge function stitches single-channel images together into one multi-channel image.

void split(
    const Mat &mtx,   // original color image
    vector<Mat> &mv   // resulting set of 1-channel images
)

void merge(
    const vector<Mat> &mv, // initial set of 1-channel images
    Mat &dst               // resulting color image
)

Most often they are used to process each color channel separately, as well as for various manipulations of the channels.

Page 248: Lections on Open Frameworks

Mat class: splitting into channels

int main(int argc, const char **argv)
{
    Mat image = imread("C:\\green_apple.jpg");

    // Split the original image into three channels:
    // channels[0], channels[1], channels[2]
    vector<Mat> channels;
    split(image, channels);

    // Show the channels in separate windows.
    // Note that the red channel is 2, not 0.
    imshow("Red", channels[2]);
    imshow("Green", channels[1]);
    imshow("Blue", channels[0]);
    waitKey(0);
    return 0;
}

Page 249: Lections on Open Frameworks

Image processing functions: smoothing

The GaussianBlur function performs image smoothing with a Gaussian filter.

Most often, smoothing with a small filter is applied to remove small noise from an image before subsequent analysis.

Original image / The image smoothed with an 11 x 11 kernel

http://www.innocentenglish.com/funny-pics/best-pics/stairs-sidewalk-art.jpg

Page 250: Lections on Open Frameworks

Image processing functions: threshold

The threshold function performs threshold processing of an image.

Most often it is used to select the pixels of objects of interest in the image.

http://www.svi.nl/wikiimg/SeedAndThreshold_02.png

Page 251: Lections on Open Frameworks

Image processing functions: filling areas

The floodFill function fills an area, starting from a pixel (x, y), up to specified boundaries, using 4- or 8-adjacency of pixels.

Important: it spoils the original image, since it fills it.

Most often it is used to extract areas identified by threshold processing, for subsequent analysis.

http://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Square_4_connectivity.svg/300px-Square_4_connectivity.svg.png
http://tunginobi.spheredev.org/images/flood_fill_ss_01.png

Page 252: Lections on Open Frameworks

Image processing functions: extracting contours

The contour of an object is the line representing the edge of the object's shape. Emphasizing contour points: Sobel. Extracting contour lines: Canny.

Applications:
1. Recognition. From the contour one can often determine the type of object being observed.

2. Measurement. From the contour one can accurately estimate the size of an object, its rotation and its location.

http://howto.nicubunu.ro/gears/gears_16.png
http://cvpr.uni-muenster.de/research/rack/index.html

Page 253: Lections on Open Frameworks

A sample OpenCV project: finding billiard balls

Page 254: Lections on Open Frameworks

Problem

In an image of a billiard table, find the coordinates of the centers of the billiard balls.

Algorithm:

1. Find bright pixels by thresholding.
2. Analyze the areas: using flood filling, find the connected regions, and among them find those whose dimensions allow them to be balls.

Page 255: Lections on Open Frameworks

1. Threshold. Problem: in the image of the billiard table, select the pixels that are not the table (the shooting conditions are such that the table is dark).

Mat image = imread("C:\\billiard.png"); // load the input image
imshow("Input image", image);
vector<Mat> planes;
split(image, planes);
Mat gray = 0.299 * planes[2] + 0.587 * planes[1] + 0.114 * planes[0];

double thresh = 50.0; // the threshold is chosen empirically
threshold(gray, gray, thresh, 255.0, CV_THRESH_BINARY);

imshow("Threshold", gray);

Page 256: Lections on Open Frameworks

threshold - an example of application

Please note: we have selected only the pixels that are "not the table". To find the coordinates of the centers of the balls and the cue position, further processing is required.

Page 257: Lections on Open Frameworks

2. Analysis of areas

floodFill - extraction of connected regions

Morphological operations:
dilate - dilation
erode - erosion

Page 258: Lections on Open Frameworks

floodFill - description

The floodFill function fills an area, starting from a pixel (x, y), up to specified boundaries, using 4- or 8-adjacency of pixels.

Important: it spoils the original image, since it fills it.

1. Most often it is used to extract areas identified by threshold processing, for subsequent analysis.

2. It can also be used to remove small noise in a binary image (in contrast to "erosion + dilation", it does not spoil the boundaries of larger areas).

3. If you enlarge the bounding box of a found area by 1 pixel on all sides and perform a fill, you can eliminate internal holes in the area.

Page 259: Lections on Open Frameworks

floodFill - a list of options

Declaration and description of the parameters:

int floodFill(Mat &image, Point seed, Scalar newVal, Rect *rect = 0,
              Scalar loDiff = Scalar(), Scalar upDiff = Scalar(),
              int flags = 4)

image - the input image, 1- or 3-channel, 8- or 32-bit.
seed - the pixel from which to start filling
newVal - the new value for the filled pixels
rect - the bounding box of the found area
loDiff, upDiff - the allowable difference from neighbors
(or from the seed pixel, if flags |= FLOODFILL_FIXED_RANGE);
that is, a new pixel must satisfy
value - loDiff <= valueNew <= value + upDiff.
flags = 4 or 8 - connectivity.

The return value is the number of pixels in the filled area.

Page 260: Lections on Open Frameworks

floodFill - a list of options

A note about OpenCV types:

Point - an integer point with fields int x, y;
Rect - a rectangle with integer fields int x, y, width, height;
Scalar - a representation of color, for example:
Scalar(255) - 1-channel color
Scalar(255, 255, 255) - 3-channel color

Page 261: Lections on Open Frameworks

2. Analysis of areas

Problem: in the image of the billiard table, find the billiard balls, i.e. compute their centers and sizes. The idea: using the threshold result, iterate over all connected regions with floodFill, and consider as balls those areas whose sizes lie within pre-defined boundaries.

const int minRectDim = 25; // min and max size of the balls
const int maxRectDim = 35;

// Iterate over the image pixels
for (int y = 0; y < gray.rows; y++) {
    for (int x = 0; x < gray.cols; x++) {
        int value = gray.at<uchar>(y, x);
        if (value == 255) {
            // If the value is 255, fill the area with 200
            Rect rect; // the bounding box is written here
            int count = floodFill(gray, Point(x, y), Scalar(200), &rect);

Page 262: Lections on Open Frameworks

Analysis of areas

            // Check the size
            if (rect.width >= minRectDim && rect.width <= maxRectDim
                && rect.height >= minRectDim && rect.height <= maxRectDim)
            {
                // Center
                int x = rect.x + rect.width / 2;
                int y = rect.y + rect.height / 2;
                // Radius
                int rad = (rect.width + rect.height) / 4;
                // Draw a circle with a thickness of 2 pixels
                circle(image, Point(x, y), rad, Scalar(255, 0, 255), 2);
            }
        }
    }
}

imshow("out", image);

Page 263: Lections on Open Frameworks

floodFill - example of application

Page 264: Lections on Open Frameworks

Comments

In this example, we considered the simplest method of finding the balls in the picture: analyzing the sizes of the bounding boxes.

Such an analysis works only under the assumption that the image contains no other regions with similar bounding boxes.

A real application requires a more detailed analysis of the regions. This is primarily because balls that are close to each other can "stick together" into one connected region.

Possible approaches to solving this problem:

1. Fill the interior of the region, extract the contour of the resulting area, and analyze its convex and concave parts to separate the individual balls.

2. Use a circular template: slide it over the found region and look for its best positions.

Page 265: Lections on Open Frameworks

Debugging in OpenCV

To debug a project that uses OpenCV for processing, it is very useful to display intermediate images using imshow:

imshow("image", image); - displays image in a window with the title "image".

Warnings:
1. You need #include "highgui.h".
2. If you display several images in windows with the same name, only the last image will be visible.

Page 266: Lections on Open Frameworks

Homework

Make a project in openFrameworks which:
1) receives a picture from the camera;

2) passes this picture to OpenCV, where it is
- smoothed,
- threshold-processed;

3) displays the camera picture and the resulting image on the screen using openFrameworks.

Page 267: Lections on Open Frameworks

8. Communicating

with other programs via OSC


Page 268: Lections on Open Frameworks

Why communication is needed

Different systems implement multimedia and control capabilities with varying degrees of sophistication. A complex project may therefore be more convenient and easier to implement on the basis of several systems, which can run on one computer or on several computers in a network.

Page 269: Lections on Open Frameworks

OSC protocol

The OSC ("Open Sound Control") protocol is a network protocol based on UDP.

+ Low transmission latency, because it uses UDP (not TCP/IP).
- Packets can be lost, so it is better to send data at a fixed rate, in small portions.

Page 270: Lections on Open Frameworks

OSC protocol

Page 271: Lections on Open Frameworks

Other ways

In addition to OSC, you can use TCP sockets, for example for communicating with Flash (via XMLSockets).

In openFrameworks this is the ofxNetwork addon.

Examples: networkTcpServerExample, networkTcpClientExample.

Page 273: Lections on Open Frameworks

Types of sensors

We will consider just a few of them.

Page 275: Lections on Open Frameworks

Infrared motion detectors

Active: consist of an emitter and a receiver.

Acroname Sharp IR (not yet found in Russia) - also measures the distance.

Autonics BA2M-DDT - relay type; gives a binary signal when an object comes closer than some distance.

Problem: if the object is black, the sensor may not work, because such sensors rely on the object reflecting light. The measured distance also depends on the color of the object.

Passive: used in security systems. Based on measuring the (thermal) infrared radiation of objects.

Page 276: Lections on Open Frameworks

Ultrasonic distance sensor - Sonar

Maxbotix LV-MAXSONAR-EZ1

Compact, inexpensive, used by hobbyists, in particular for experimental robotics. Problems for use in interactive systems: not very accurate, and does not work behind glass.

Principle of operation: it sends an ultrasonic pulse and measures the time after which the reflected signal returns.

Page 277: Lections on Open Frameworks

Sensor connection with Arduino

Page 278: Lections on Open Frameworks

General description
Arduino is a hardware computing platform whose main components are a simple I/O board and a development environment based on the Processing/Wiring language. Arduino can work as a stand-alone microcontroller board, or connect via USB to a PC and integrate with software running on the computer, for example Flash, Processing, Max/MSP, Pure Data, SuperCollider, openFrameworks.

Hardware
An Arduino board consists of an Atmel AVR microcontroller and supporting components for programming and integration with other circuits.

Programming
Arduino programs are created in a Java-based integrated development environment that includes a code editor, a compiler, and a module for uploading firmware to the board. It runs immediately, without installation.

Page 279: Lections on Open Frameworks

Freeduino - Arduino clone

Although the hardware documentation and the software code are published under a copyleft license, the developers have expressed the wish that the name «Arduino» (and its derivatives) be a trademark for the official product and not be used for derivative works without permission. An official document on the use of the Arduino name emphasizes that the project is open to everyone working on the official product.

This protection of the name led a group of users to fork the board design of the Arduino Diecimila and produce an equivalent board called Freeduino. The name Freeduino is not trademarked and may be used for any purpose.

Freeduino 2009 is a full analogue of the Arduino Duemilanove.

Page 280: Lections on Open Frameworks

Freeduino 2009

• Microcontroller: ATmega168 (ATmega328)
• Digital I/O: 14 ports (6 of them with PWM)
• Analog inputs: 6 ports
• Flash memory: 16 KB (32 KB), 2 KB of which are used by the bootloader
• RAM (SRAM): 1 KB (2 KB)
• EEPROM: 512 bytes (1024 bytes)
• Clock speed: 16 MHz
• PC interface: USB
• Power: from USB or from an external source, selected automatically

http://www.freeduino.ru

Page 281: Lections on Open Frameworks

A program to blink the built-in LED

Run Arduino.exe, then select from the menu: File - Examples - Digital - Blink

int ledPin = 13;  // LED connected to digital pin 13 (also wired to the built-in LED)

// The setup() method runs once, when the sketch starts
void setup() {
  // initialize the digital pin as an output:
  pinMode(ledPin, OUTPUT);
}

// The loop() method runs over and over again,
// as long as the Arduino has power
void loop() {
  digitalWrite(ledPin, HIGH);  // set the LED on
  delay(1000);                 // wait for a second
  digitalWrite(ledPin, LOW);   // set the LED off
  delay(1000);                 // wait for a second
}

Page 282: Lections on Open Frameworks

A program to blink an external LED

Connect the LED to digital output 12 and to AREF (which carries no power). (From the electronics point of view this is not ideal - there should be a current-limiting resistor - but it is fine for an example.)

In the previous program, change the line:
int ledPin = 12;

Page 283: Lections on Open Frameworks

Here the Arduino works as a standalone device: you can disconnect the USB cable and connect an external power source.

But we are more interested in how to get the data into openFrameworks. Let's deal with that.

A program to blink an external LED

Page 284: Lections on Open Frameworks

How to transfer data from the Arduino to a PC: the serial port.

The Arduino communicates with the PC via a serial COM port. We connected it via USB, so the COM port is emulated.

Connect the sonar to the Arduino; the Arduino will simply print the incoming values to the port.

The Arduino programming environment lets you watch the data coming from the port and send values back.

Page 285: Lections on Open Frameworks

Connecting the sonar to an Arduino analog input

Three pins: power, ground, and analog input 0.

Page 286: Lections on Open Frameworks

Connecting the sonar to an Arduino analog input

const int analogInPin = 0;  // analog input pin that the sonar is attached to
int sensorValue = 0;        // value read from the port

void setup() {
  // initialize serial communications at 9600 bps:
  Serial.begin(9600);
}

void loop() {
  // read the analog in value:
  sensorValue = analogRead(analogInPin);
  // print the results to the serial monitor:
  Serial.print("sensor = ");
  Serial.println(sensorValue);
  // wait 10 milliseconds before the next loop
  // for the analog-to-digital converter to settle
  // after the last reading:
  delay(10);
}

Page 287: Lections on Open Frameworks

Connecting the sonar to an Arduino analog input

After compiling and uploading the program to the device, press "Serial Monitor" in the Arduino IDE and watch the list of values:
...
sensor = 50
sensor = 50
sensor = 143
sensor = 143
sensor = 140
sensor = 140
sensor = 50
sensor = 48
...

NOTE: increasing the accuracy of the sonar measurements. If you add analogReference(INTERNAL); in setup(), the measurements become more accurate: values are compared against 1.1 V instead of 5 V. (Other values: DEFAULT = 5 V; EXTERNAL = an external reference, compared with AREF.)

Page 288: Lections on Open Frameworks

Firmata protocol

We have seen an example of how the ofSerial class can be used to work with the serial port. This allows you to communicate with the Arduino. Communication is usually done in a request-response fashion: send a control signal to the Arduino (e.g., a single byte) and expect a packet of data in return.

Inconvenience: for each new configuration of sensors, you have to reprogram and debug both the Arduino and the data processing in openFrameworks.

Solution: use the Firmata protocol, designed specifically to simplify data input/output. The program on the Arduino side does not change; all configuration is done from openFrameworks.

Page 289: Lections on Open Frameworks

Firmata protocol

Firmata is a generic protocol for communicating with microcontrollers from software on a host computer. It is intended to work with any host computer software package. Right now there is a matching object in a number of languages. It is easy to add objects for other software to use this protocol. Basically, this firmware establishes a protocol for talking to the Arduino from the host software. The aim is to allow people to completely control the Arduino from software on the host computer. http://firmata.org

The ofArduino class in openFrameworks uses Firmata and provides simple commands to connect to the Arduino and exchange data with its ports, for example:

int getAnalog(int pin) - read data from the analog input with the given pin number
int getDigital(int pin) - read data from the digital input with the given pin number

To use it, you must upload the Firmata program to the Arduino; see below.

Page 290: Lections on Open Frameworks

Example: openFrameworks - Firmata - Arduino

1. On the Arduino side: upload the Firmata software (File - Examples - Firmata - StandardFirmata).
2. On the openFrameworks side: run the firmataExample, first specifying in it the port to use (this may be COM4, COM6, COM7, etc.): ard.connect("COM6", 57600);

(3. If you are working with the same sonar, add analogReference(INTERNAL); in the Arduino setup() to improve the accuracy of the measurements; see the note above.)

How to find the port: at startup, the program writes a list of available ports to the debug window. If the list is not printed, you can find the connected device in Device Manager.

Page 291: Lections on Open Frameworks

Example of the data flow: openFrameworks - Firmata - Arduino

Page 292: Lections on Open Frameworks

A project with a growing square

Objective: to study the accuracy and stability of the sonar measurements.

Based on the firmataExample, make a project that draws a square whose size depends on the measured distance.

Page 293: Lections on Open Frameworks

Project: sound generator with the sonar

The idea: use the sonar values to change the pitch of the generated sound.

1. Sound generation is best done following the audioOutputExample.

2. The data coming from the sonar should be smoothed. The simplest filter:
filtered = f * value + (1 - f) * filtered

where filtered is the smoothed value, value is the new value, and f is the filter coefficient, between 0 and 1, for example 0.2 or 0.01.