
KINGDOM
《KINGDOM》為舞團 [安娜琪舞蹈劇場 Anarchy Dance Theatre]的青年創作者趙亭婷的第一個舞蹈科技作品,是一場XR跨域共構的實驗性表演作品,結合了3D掃描技術與動態捕捉系統於VR之中,透過現場多螢幕的拼湊與舞者的身體語彙,將破碎、液化且不完整的物件掃描檔案,構築出一座無人居住的數位王國,在不斷崩解與重組的數位環境裡,不停地適應新的認知與環境。
我受邀為此作品進行VR開發,並攜手深耕於VR創作的藝術家鄭依婷為此打造作品所形塑的虛擬世界,以及與現場劇場空間的橋接串聯。
KINGDOM is the first dance-technology work by emerging choreographer Ting-Ting Chao of Anarchy Dance Theatre. The piece is an experimental XR performance that integrates 3D scanning and motion capture within a VR environment. Through multi-screen arrangements and the dancers’ physical language, fragmented and incomplete scanned objects are assembled into an uninhabited digital kingdom, continuously collapsing and reforming as it adapts to new perceptions and environments.
I was invited to lead the VR development of the project, collaborating with VR artist Yi-Ting Cheng to create the virtual world of KINGDOM and to bridge it with the physical theatre space.
Concept
“在虛擬世界中,空間與時間被切割、壓縮與延展,彷彿成了一層層可被讀取的切片,身體不再有固定的位置,而是跟隨著失衡的記憶斷層,懸浮在虛實之間。”
身處於網路與數位技術高速發展的世代,我們早已熟練地運用各式媒介捕捉與記錄周遭事物。然而,在這樣不斷更新的洪流中,《KINGDOM》正試圖邀請觀眾,一同回望那些被遺落、被忽視、被淘汰的事物。那些日常的顯而易見,正是我們不曾記得的,也正是如此它長出了自己的個性、色調與溫度。
In the virtual world, space and time are fragmented, compressed, and stretched, becoming layers of readable slices. The body no longer occupies a fixed position; instead, it follows fractured fault lines of memory, floating between the virtual and the real.
Living in an era of rapidly advancing networks and digital technologies, we have long been adept at capturing and recording the world around us through all kinds of media. Amid this relentless current of renewal, KINGDOM invites the audience to look back together at what has been left behind, overlooked, and discarded. The everyday things in plain sight are precisely what we never remember, and it is for that reason they grow a character, a tone, and a warmth of their own.

本作品我負責的工作為VR開發,我將實際實行內容分為兩部分,一為技術基礎架構建立,如何為這場表演提供所需的技術服務,包含VR虛實環境建立,舞者的動態捕捉,串聯現場表演的聲音和影像等。另一部分為製作作品所需的視覺和效果,如畫面的場景、運鏡、轉場,所需的特效如替身複製、粒子化等等。
I was responsible for the VR development of this project, covering both the technical framework—such as building the mixed reality environment, motion capture, and integration of live sound and visuals—and the creation of visual content and effects, including scene design, camera movement, transitions, and particle-based transformations.
系統架構
System Architecture
現實與虛擬
Reality and Virtuality
此作品的核心設定是虛實疊合的空間,從技術角度稱為 MR(Mixed Reality)。我們在虛擬世界中重建了與現場一致的空間——現實中的柱子在虛擬中也有對應的柱子,表演者在空間中移動時,虛擬中的位置也會同步更新。
This work centers on a space where the real and the virtual overlap, technically defined as Mixed Reality (MR). We rebuilt the physical site one-to-one in the virtual world: a pillar in the real space has a corresponding pillar in the virtual one, and as performers move through the venue, their virtual positions update in sync.
肢體動作同樣需要即時同步到虛擬空間。我們採用遊戲與動畫產業常見的動作捕捉技術(Motion Capture),即時捕捉舞者的動作,並透過遊戲引擎驅動虛擬中的舞者替身。
The dancers’ movements are captured in real time through motion capture technology and translated via a game engine to drive their virtual counterparts.
人物身上紅點為動補系統
The red markers on the performers are part of the motion capture system.
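Live mocap streams arrive as per-joint transforms at a fixed rate, and jitter or dropped frames are commonly hidden by smoothing before the avatar is posed. A minimal sketch of that idea in Python — the production pipeline ran inside a game engine with a commercial mocap system, so the frame format and function name here are purely illustrative:

```python
def smooth_pose(prev, incoming, alpha=0.3):
    """Exponentially smooth per-joint positions between mocap frames.

    prev / incoming: dicts mapping joint name -> (x, y, z) position
    (a hypothetical frame format, not the real mocap SDK's).
    alpha: blend weight toward the newest sample (0 = frozen, 1 = raw).
    """
    smoothed = {}
    for joint, new in incoming.items():
        old = prev.get(joint, new)  # first sighting: take the raw sample
        smoothed[joint] = tuple(o + (n - o) * alpha for o, n in zip(old, new))
    return smoothed
```

Calling this once per frame with the previous smoothed pose yields a simple low-pass filter; a higher `alpha` tracks fast choreography more tightly at the cost of visible noise.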
作品中有一個橋段,舞者會與物件互動(可移動電視與床)。由於舞者在表演時配戴 VR 裝置,我們必須在虛擬中呈現物件的位置,才能讓舞者準確互動。同時,這些互動關係也會透過現場的多個螢幕呈現給觀眾。為了確保互動的精準度,我們採用 Steam VR 系統,在需要互動的物件上安裝 tracker 來追蹤其空間位置。
In one section of the work, the dancers interact with movable set pieces (a rolling TV and a bed). Because the dancers perform while wearing VR headsets, these objects must also appear at the correct positions in the virtual world for the interaction to be accurate, and the interplay is simultaneously shown to the audience on the on-site screens. To keep the interaction precise, SteamVR trackers are mounted on each interactive object to track its position in space.
這裡遇到了第一個技術難點:如何整合三個不同的系統——VR 裝置、動補系統、物件tracker系統,並讓它們的座標系在虛擬空間中對齊,同時也與現實空間對齊。為此我們採用 Meta Spatial Anchor 作為現實空間的參考點,再將所有系統對齊至這個參考點。
The primary technical challenge was aligning multiple tracking systems—including VR, motion capture, and object tracking—within a unified coordinate space that corresponds consistently to the physical environment, achieved by using Meta Spatial Anchor as a common reference point.
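The alignment step reduces to a change of basis: if the same physical anchor is observed as a rigid pose in each tracking system, the transform mapping one system's coordinates into another is the anchor pose in the target system composed with the inverse of the anchor pose in the source system. A minimal Python sketch with plain 4x4 matrices, restricted to yaw-plus-translation poses for brevity — an illustration of the math, not the engine code used in the show:

```python
import math

def yaw_pose(yaw, tx, ty, tz):
    """4x4 rigid transform: rotation about the vertical (Y) axis + translation."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, 0.0, s, tx],
            [0.0, 1.0, 0.0, ty],
            [-s, 0.0, c, tz],
            [0.0, 0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_rigid(m):
    """Inverse of a rigid transform: transpose the rotation, rotate-negate t."""
    r = [[m[j][i] for j in range(3)] for i in range(3)]  # R^T
    t = [m[i][3] for i in range(3)]
    ti = [-sum(r[i][k] * t[k] for k in range(3)) for i in range(3)]
    return [r[0] + [ti[0]], r[1] + [ti[1]], r[2] + [ti[2]],
            [0.0, 0.0, 0.0, 1.0]]

def source_to_target(anchor_in_target, anchor_in_source):
    """Map source-system coordinates into the target system, given the same
    physical anchor observed as a pose in both systems."""
    return matmul(anchor_in_target, invert_rigid(anchor_in_source))

def transform_point(m, p):
    v = list(p) + [1.0]
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))
```

With the anchor as the shared reference, each subsystem (headset, mocap, object trackers) only needs its own observation of the anchor; composing these transforms brings every pose into one coordinate space that also matches the physical stage.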
系統與效能
System and Performance
VR系統本身即需同時渲染左右眼畫面,本次展演現場配置的螢幕與投影顯示裝置有六個,遊戲引擎需要同時渲染的畫面數量可能超過八個,對整體效能要求極高。Quest系統常見的開發模式,多為直接在一體機上運行APK,或透過Quest Link以PC-VR模式進行連線。然而前者受限於Quest本身的硬體規格,難以支撐前述多視角、多輸出畫面的渲染需求;後者則在長時間無線連線的穩定性上仍存在風險。
基於上述限制,我們最終採用一種較為少見但更具彈性的架構——以多人連線機制進行系統分離運算。系統由一個電腦端程式與一個VR端APK各自獨立運行,並透過區域網路連線進入同一個Session,運作方式類似於區域網路的多人遊戲架構。
此架構的優勢在於,高負載的多視角渲染與大型畫面輸出由PC端負責,充分發揮電腦硬體的運算效能;而舞者僅需在VR裝置上看到自身視角畫面。即使VR端因裝置效能限制、畫面卡頓,甚至應用程式發生當機,也不會影響現場螢幕與投影的視覺呈現,避免整體展演中斷,確保現場整體視覺呈現品質和穩定度。
VR systems must render stereoscopic views simultaneously, and in this production the engine is required to output more than eight views to support six on-site screens and projections, placing high demands on performance. While Quest development typically runs either as a standalone APK or via Quest Link in PC-VR mode, the former is limited by hardware constraints and the latter raises stability concerns for prolonged wireless use.
To address these challenges, we adopted a distributed architecture based on a multiplayer networking model. A PC application and a VR-based APK run independently while connecting to the same local session, similar to a LAN multiplayer setup. This approach assigns high-load multi-view rendering to the PC, while the performer views only their own perspective in VR. As a result, even if the VR side experiences performance issues or crashes, the audience-facing visuals remain stable and uninterrupted.
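The split can be pictured as a tiny host/client pair: the headset pushes its state over the LAN and the PC consumes whatever arrives, so a stalled or crashed headset only stops the updates rather than the renderer. A toy Python sketch over UDP — the production used a game engine's built-in multiplayer session rather than raw sockets, and the port number and message schema here are invented:

```python
import json
import socket
import threading

# Hypothetical port and message format, for illustration only.
PORT = 47800

def start_pc_host(received, stop):
    """PC side: collect pose updates pushed from the headset over the LAN.
    In the show this role also drove the multi-view rendering for the
    on-site screens; here it just appends decoded messages to `received`."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", PORT))
    sock.settimeout(0.5)  # wake periodically to check the stop flag
    while not stop.is_set():
        try:
            data, _ = sock.recvfrom(1024)
        except socket.timeout:
            continue
        received.append(json.loads(data.decode("utf-8")))
    sock.close()

def send_vr_pose(pose):
    """VR side: fire-and-forget pose update. If the headset app stalls or
    crashes, the host merely stops receiving updates and keeps rendering."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(json.dumps(pose).encode("utf-8"), ("127.0.0.1", PORT))
    sock.close()
```

The key design property is the one-way, connectionless flow: the audience-facing renderer never blocks on the headset, which is what keeps the on-site screens stable through a VR-side failure.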

現場螢幕配置:4個電視螢幕+1個投影
On-site screen configuration: four TV screens and one projector
XR互動物件
XR Interactive Objects
Modular Character Parts
Clone Character
Particle-Composed World
作品的最後一段,場景會幻化成由無數粒子構成的世界,舞者和虛擬替身在其中穿梭、互動,最後整個粒子世界爆炸崩塌,逐漸消失。
In the final section of the work, the scene transforms into a world composed of countless particles. The dancer and their virtual counterpart move through and interact within this space, as the entire particle world ultimately explodes, collapses, and gradually fades away.
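The explode-and-fade behaviour can be reduced to per-particle radial velocity plus an opacity ramp, culling particles once they are fully transparent. A minimal CPU-side sketch in Python — the actual effect ran on the GPU inside the engine, and the origin-centred blast plus the speed/fade constants are assumptions for illustration:

```python
import math

class Particle:
    def __init__(self, pos):
        self.pos = list(pos)
        # Explosion: velocity points radially outward from the world origin
        # (the collapse centre is an assumption of this sketch).
        norm = math.sqrt(sum(c * c for c in pos)) or 1.0
        self.vel = [c / norm for c in pos]
        self.alpha = 1.0  # opacity; the world fades as it dissolves

def step(particles, dt, speed=2.0, fade=0.5):
    """Advance one frame: push particles outward, fade them, cull the dead."""
    alive = []
    for p in particles:
        p.pos = [c + v * speed * dt for c, v in zip(p.pos, p.vel)]
        p.alpha = max(0.0, p.alpha - fade * dt)
        if p.alpha > 0.0:
            alive.append(p)
    return alive
```

Running `step` each frame until the list empties gives the arc described above: the world blows apart, thins out, and disappears.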

Character Dissolve
Full Credits:
Concept, Choreography & Performance: 趙亭婷 (Ting-Ting Chao)
VR Visual Content & Interaction Design: 鄭伊婷 (Yi-Ting Cheng)
VR Engineering: 柯柏羽
VR Scene Design: 許子安
Spatial Design: 江冠男
Lighting Design: 王宥珺
Sound Design: 張欣語
Costume Design: 郭 萱
Stage Manager: 洪伊柔
Video Technical Coordinator: 吳承儒
Video Technical Operator: 李芷萱
Video Crew: 黃楷雯
Lighting Technical Director: 梁弘岳
Lighting Crew: 許皓宇、陳品璇、許俞苓、王凱莉
Stage Technical Director: 沈承志
Spatial Production Support: 顏宇婕
Stage Crew: 邵莉喬、蔡東融
Sound Operator: 陳柏豪
Creative Mentor: 黃美寧
Creative Consultant: 謝杰樺
Technology Consultant: 陳韋安 (@chwan1)
Art-Business Collaboration & Sustainability Consultant: 潘思廷
Producer: 戴筱凡
Ticketing & Administrative Support: 楊舒涵
Key Visual Design: 徐紹恩
Program Booklet: 粘馨予
Photography: 林峻永、林蔚圻 (work-in-progress showing)
Videography: 駱思維
Production: Anarchy Dance Theatre (安娜琪舞蹈劇場)
Sponsors: 革蘭科技股份有限公司, Ministry of Culture, Department of Cultural Affairs of Taipei City Government, 菁霖文化藝術基金會
Partners: C-LAB Taiwan Sound Lab (臺灣聲響實驗室), 臺北數位藝術媒合平臺, Industrial Technology Research Institute
Special Thanks: 亦樂製造有限公司、長弓舞蹈劇場、美麗合作/許程崴製作舞團、當若科技藝術、謝文毅工作室、王伯宇、林維儀、侯君皓、洪敬庭、凌 天、陳昱伶、黃湘淋、熊世翔、樊香君、韓希妶
This production is supported by the Ministry of Culture's 2025 (ROC year 114) Young Artist Development Grant.
Anarchy Dance Theatre is a 2025 TAIWAN TOP performing arts troupe.
Related works
