
Conducting Screens: A possible approach to Multi-Device Web Artwork

3/20/2024, 3:29:24 PM | Jeanyoon Choi

Original Notes (Pre-LLM)

Think of the case: One Mobile, Multiple Screens. The audience controls the mobile device to alter the content spread across the screens.

Lots of options in the contemporary use of mobile devices… scroll, touch, pinch, lots of finger-based gestures on the touch interface. But what is lacking?

A more phenomenological approach?

As many mobile devices nowadays have accelerometers and motion sensors, why not use these features?

This is like an act of conducting… conducting with the mobile phone. The gesture resonates perfectly with that act.

Beyond using just ‘fingers’, as in our daily mobile interactions… Now it’s time to expand to the whole ‘hand’. A beautiful, poetic, phenomenological transition from the finger to the hand.

And what more? Is it just the act of using the hand and the motion sensor on the mobile input end? → We also have multi-device output on the screens side (the screens can be laptops or projectors, but the key factor is that the output is in a multi-device format).

Just like the experience of orchestration, conducting → a conductor never conducts a single instrument; they manage the whole rhythm made by a number of instruments placed across the space…

Similarly, once we ‘conduct’ on the mobile, the output devices react correspondingly (now I’m thinking these can also be mobile devices; no need to limit the scope to just computers, though the price per device should be considered), creating harmonic audio-visuals placed around the space… The input conduction and the output devices’ audio-visuals resonate harmonically.
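A minimal sketch of this one-conductor, many-screens topology, assuming a small Node.js relay built on the `ws` package (the port and message handling here are placeholders, not a finished design):

```js
// relay.js -- hypothetical minimal relay: whatever the conducting phone
// sends is re-broadcast to every connected screen. Run with Node.js
// after `npm install ws`. Rooms, auth and reconnection are omitted.
const { WebSocketServer, WebSocket } = require('ws');

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    // Fan the conductor's motion data out to all other devices.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
  });
});
```

Each screen would then open a WebSocket to this relay and render whichever fragment of the whole it is responsible for.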

Now the real problem: what kind of content is altered within the act of conducting? We have a rough overall form: the user conducts via mobile, and the screens react correspondingly. But what will be the context? What will be the specific use cases? What will be the potential narrative / storytelling / experience-telling?

What will be the trackable input?

JavaScript device motion / device acceleration (the browser’s DeviceMotionEvent and DeviceOrientationEvent), sketched below.
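A conductor-side sketch of reading those two streams and forwarding them (the relay URL matches the hypothetical server above; note iOS 13+ gates the sensors behind a permission prompt):

```js
// Conductor-side sketch: read both sensor streams and forward them to
// the relay. The URL matches the hypothetical server sketched earlier.
const socket = new WebSocket('ws://localhost:8080');

async function startConducting() {
  // iOS 13+ requires an explicit permission prompt, which in turn must
  // be triggered by a user gesture.
  for (const Ev of [window.DeviceMotionEvent, window.DeviceOrientationEvent]) {
    if (Ev && typeof Ev.requestPermission === 'function') {
      if (await Ev.requestPermission() !== 'granted') return;
    }
  }

  // Acceleration (m/s^2): the energy of each conducting gesture.
  window.addEventListener('devicemotion', (e) => {
    const a = e.accelerationIncludingGravity;
    if (a) socket.send(JSON.stringify({ type: 'motion', x: a.x, y: a.y, z: a.z }));
  });

  // Orientation (degrees): the angle at which the phone is held.
  window.addEventListener('deviceorientation', (e) => {
    socket.send(JSON.stringify({
      type: 'orientation', alpha: e.alpha, beta: e.beta, gamma: e.gamma,
    }));
  });
}

// A tap is needed so the permission prompt is allowed to appear.
document.body.addEventListener('click', startConducting, { once: true });
```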

What can this induce?

A. A 3D scene, transforming/rotating in correspondence with the motion? (Sketched after the sub-points below.)

Scene view itself might rotate

Or the object might be morphed accordingly

Or it can be both
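A rough screen-side sketch of option A, assuming Three.js and the orientation messages from the relay above:

```js
// Screen-side sketch for option A: a Three.js object rotates as the
// phone is conducted. Assumes {alpha, beta, gamma} orientation messages.
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

const cube = new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshNormalMaterial());
scene.add(cube);

const socket = new WebSocket('ws://localhost:8080');
socket.onmessage = ({ data }) => {
  const msg = JSON.parse(data);
  if (msg.type !== 'orientation') return;
  // Map the phone's angles (degrees) onto the object's rotation (radians).
  cube.rotation.set(
    THREE.MathUtils.degToRad(msg.beta ?? 0),
    THREE.MathUtils.degToRad(msg.alpha ?? 0),
    THREE.MathUtils.degToRad(msg.gamma ?? 0)
  );
};

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```

Rotating the camera instead of the mesh would give the “scene view itself rotates” variant; driving a morph target instead would give the object-morphing one.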

B. Spatial audio: the audio somehow increases/decreases in correspondence with the acceleration?
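One simple reading of option B, mapping the gesture’s acceleration magnitude onto a gain node (true spatialisation would add a PannerNode per screen; the 10 m/s² full-volume threshold is an assumption):

```js
// Sketch for option B: gesture energy -> volume. The AudioContext must
// be created/resumed from a user gesture in most browsers.
const audioCtx = new AudioContext();
const osc = audioCtx.createOscillator();
const gainNode = audioCtx.createGain();
osc.connect(gainNode).connect(audioCtx.destination);
gainNode.gain.value = 0;
osc.start();

window.addEventListener('devicemotion', (e) => {
  const a = e.acceleration;
  if (!a) return;
  const magnitude = Math.hypot(a.x ?? 0, a.y ?? 0, a.z ?? 0);
  // Assumption: ~10 m/s^2 counts as a full-strength gesture.
  const target = Math.min(magnitude / 10, 1);
  gainNode.gain.setTargetAtTime(target, audioCtx.currentTime, 0.1);
});
```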

C. More primitively: the screen turning on/off, white ↔ black, based on the acceleration?

Kind of speculating on the primitive approach

Can there be potential intersections with projection mapping?
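The primitive version of option C might be nothing more than a shake threshold; a sketch, with the threshold and cooldown values guessed:

```js
// Sketch for option C: a strong enough shake flips the whole screen
// between white and black. SHAKE_THRESHOLD and the cooldown are assumed.
const SHAKE_THRESHOLD = 15; // m/s^2, to be tuned by hand
let white = true;
let lastFlip = 0;

window.addEventListener('devicemotion', (e) => {
  const a = e.acceleration;
  if (!a) return;
  const magnitude = Math.hypot(a.x ?? 0, a.y ?? 0, a.z ?? 0);
  const now = Date.now();
  // The cooldown stops one shake from flipping the screen many times.
  if (magnitude > SHAKE_THRESHOLD && now - lastFlip > 500) {
    white = !white;
    document.body.style.background = white ? 'white' : 'black';
    lastFlip = now;
  }
});
```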

D. How about morphing the 2D graphics? Speculating on Fluid Interfaces?

E. How about morphing screens full of text/numbers? How about morphing maps?

F. How to harmonically combine the motion and acceleration data and resonate with both?
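One possible answer to F, sketched with assumed weights: treat orientation as the slow phrase of the gesture and acceleration as its fast energy, folded into a single value every screen can follow:

```js
// Sketch for option F: fold the two sensor streams into one signal.
// Orientation = the slow phrase (where the hand points); acceleration
// = the fast energy (how hard it moves). The 0.5/0.5 mix is an assumption.
const signal = { angle: 0, energy: 0 };

window.addEventListener('deviceorientation', (e) => {
  signal.angle = ((e.alpha ?? 0) % 360) / 360; // normalise to 0..1
});

window.addEventListener('devicemotion', (e) => {
  const a = e.acceleration;
  if (!a) return;
  const magnitude = Math.hypot(a.x ?? 0, a.y ?? 0, a.z ?? 0);
  // Exponential moving average, so spikes decay like musical accents.
  signal.energy = 0.9 * signal.energy + 0.1 * Math.min(magnitude / 10, 1);
});

// A single conducted value that visuals and audio can both follow.
const conductedValue = () => 0.5 * signal.angle + 0.5 * signal.energy;
```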

G. Maybe it can be very poetic… Narrative-based, storytelling-based, very subtle, the texts wandering around… As the user conducts, some kind of melody (an audio-visual melody) is continuously generated → a kind of story emerges from these multiple screens, referencing the primitive orchestral form of conducting more directly.

Also, can there be a derivative form?

A. Why only mobiles? What about the mouse? Doing the same thing with a mouse might also be poetic, and a bit retro (there might be more artistic references there). A couple of interesting web artworks may have tried this, but their output was not really multi-device (it is usually limited to a single screen/tab). How can this be expanded towards multi-device?

B. What about having two or more audience members, interfering with each other? Two or more mobile phones? Could we use the difference, the resonance, and the tension generated between them? Remember to use sine/cosine-shaped wave functions when tracking these systems. What about when the number of audience members is much higher? Like 100 or so?
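The wave idea could start from plain superposition; `phaseA` / `phaseB` below are hypothetical values derived from each audience member’s gestures:

```js
// Sketch for derivative B: each conductor contributes a phase, and the
// screens render the superposed wave. In phase: full-swing oscillation
// (resonance); half a cycle apart: flat zero (tension/cancellation).
function interference(phaseA, phaseB, t) {
  const waveA = Math.sin(2 * Math.PI * (t + phaseA));
  const waveB = Math.sin(2 * Math.PI * (t + phaseB));
  return (waveA + waveB) / 2;
}

// With many audience members (say 100), average the whole ensemble:
// random phases wash out to near zero, coordinated ones swell together.
function ensemble(phases, t) {
  const sum = phases.reduce((s, p) => s + Math.sin(2 * Math.PI * (t + p)), 0);
  return sum / phases.length;
}
```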

English Version (LLM-Generated)

These notes, “One Mobile, Multiple Screens,” discuss the usage of mobile devices and their inherent capabilities. With the advent of the finger-touch interface on mobile devices, people can now scroll, touch, and pinch on their screens. However, the notes highlight a missing element: a phenomenological approach. Since many modern mobile devices can detect motion through built-in accelerometers, why not utilize this feature?

The concept likens this feature to an act of conducting, wherein the individual, armed with a mobile phone, directs the action. This “hand conducting” idea goes beyond the simple interaction of using one’s fingers for regular mobile activities and engages the whole hand. The author describes this transition from finger to hand as a phenomenological and poetic one.

In exploring this new interactive freedom, people tend to focus solely on the input end (mobile interaction). However, the author points out that equal consideration should be given to the output end (multiple screens), further noting that the output devices can be anything from laptops and projectors to mobiles.

Drawing parallels with an orchestral conductor managing multiple instruments, the author describes creating a harmonious audio-visual effect around a space by ‘conducting’ with the mobile. However, as the interaction between the user and the multiple screens unfolds, the critical question becomes: what content will be manipulated within this act of conducting?

To understand the potential of this interaction form, consideration needs to be given to the context, specific use cases, and overall storytelling form. An interesting avenue for detecting user input is the JavaScript device motion and acceleration events.

This manipulation could induce a variety of effects like transformation/rotation of a 3D Scene, modulation of spatial audio volume, alteration of screen color based on acceleration, or manipulation of text. The concept also probes the possibility of creating a narrative or story that emerges continuously across multiple screens.

The author also contemplates derivative forms of mobile interaction, for example using a mouse instead of a mobile device to perform the same actions. Another interesting derivative would be to introduce two or more audience members using separate devices, creating a system of varied inputs and potential harmony or dissonance among the outputs, and observing the effects when the number of participants increases drastically.


Tags

Conducting

Conducting Mobiles

Accelerometer

MDWA

Interactive Art

Idea



Text written by Jeanyoon Choi

Ⓒ Jeanyoon Choi, 2024