

Products

ErgoVR Immersive CAVE Virtual Simulation Laboratory

Model: ErgoVR CAVE

Type: CAVE Virtual Reality System

Description: The ErgoVR immersive CAVE virtual reality laboratory provides technical services including light-environment and visual simulation, sound-environment and auditory simulation, odor and olfactory simulation, human-computer interaction and haptic-feedback simulation, interaction evaluation, human-machine-environment testing, ergonomics analysis, human-factors design and virtual assembly, virtual display, and virtual training.


The ErgoVR immersive CAVE human-computer interaction virtual simulation laboratory is built from core components developed in-house by Kingfar Technology: the ErgoLAB virtual-world human-machine-environment synchronization cloud platform, the CAVE virtual reality system, the ErgoVR ergonomics analysis system, the ErgoHMI human-computer interaction evaluation system, and the WorldViz (USA) head-mounted walking virtual reality system. The CAVE is a large, multi-user immersive virtual reality display and interaction environment that delivers wide-field-of-view, high-resolution, high-quality stereoscopic imagery approaching the fidelity of the real world. It provides light-environment and visual simulation, sound-environment and auditory simulation, odor and olfactory simulation, human-computer interaction and haptic-feedback simulation, interaction evaluation, human-machine-environment testing, ergonomics analysis, human-factors design and virtual assembly, virtual display, and virtual training services.

The ErgoVR virtual reality synchronization module handles visual, auditory, olfactory, haptic, and interaction simulation. The ErgoLAB human-machine-environment synchronization cloud platform comprises a wearable physiological recording module, a VR eye-tracking module, a wearable EEG module, an interaction-behavior observation module, a biomechanics module, and an environment measurement module. When virtual reality is combined with human-machine-environment or psychological-behavior research, the platform synchronously collects quantitative human-machine-environment data in real time as the 3D virtual environment changes (including eye movement, EEG, respiration, heart rate, pulse, skin conductance, skin temperature, ECG, EMG, body motion, joint angles, body pressure, pull force, grip force, and pinch force, plus physical-environment data such as vibration, noise, illuminance, atmospheric pressure, temperature, and humidity) and analyzes and evaluates them, producing quantitative results that support scientific research with objective data.
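The key technical step here is aligning sensor streams that arrive at different rates onto one common clock so that each moment in the virtual environment can be matched against all channels at once. A minimal sketch of such timestamp alignment, assuming hypothetical stream names and rates (this is an illustration of the general technique, not the ErgoLAB API):

```python
from bisect import bisect_left

def nearest_sample(timestamps, values, t):
    """Return the sample whose timestamp is closest to t (timestamps sorted)."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return values[0]
    if i == len(timestamps):
        return values[-1]
    before, after = timestamps[i - 1], timestamps[i]
    return values[i] if after - t < t - before else values[i - 1]

def align_streams(streams, rate_hz, duration_s):
    """Resample every stream onto a common clock ticking at rate_hz."""
    ticks = [k / rate_hz for k in range(int(duration_s * rate_hz))]
    return [
        {name: nearest_sample(ts, vs, t) for name, (ts, vs) in streams.items()}
        | {"t": t}
        for t in ticks
    ]

# Hypothetical streams: 4 Hz "gaze" and 8 Hz "eda", aligned on a 2 Hz clock.
gaze = ([i / 4 for i in range(8)], [f"g{i}" for i in range(8)])
eda = ([i / 8 for i in range(16)], [f"e{i}" for i in range(16)])
rows = align_streams({"gaze": gaze, "eda": eda}, rate_hz=2, duration_s=2)
print(rows[1])  # the sample closest to t = 0.5 s from each stream
```

Nearest-sample alignment keeps each channel's original values; a production pipeline would typically interpolate continuous signals instead, but the clock-synchronization idea is the same.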

As the core data synchronization and analysis platform of the solution, the ErgoLAB human-machine-environment synchronization platform supports not only virtual reality environments but also real-world field studies and laboratory-based basic research, collecting multi-source data and performing quantitative evaluation in any experimental setting. (The platform comprises the virtual reality synchronization module, wearable physiological recording module, VR eye-tracking module, wearable EEG module, interaction-behavior observation module, biomechanics module, and environment measurement module.)

As the core virtual reality software engine of the solution, WorldViz supports VR headsets and also provides users with high-quality application content. Combined with the walking motion-tracking system and the virtual human-computer interaction system, users can fully interact with virtual scenes and their content.

Application Areas

BIM environment-behavior research virtual simulation laboratory solution: affective architectural design, environmental behavior, interior design, human-settlement research, etc.

Interaction design virtual simulation laboratory solution: virtual planning, virtual design, virtual assembly, virtual review, virtual training, equipment-status visualization, etc.

Defense weapons and equipment human-machine-environment virtual simulation laboratory solution: human-machine-environment systems engineering for weapons and equipment, applications in military psychology, military training and education, combat command, and weapons research and development.

User experience and usability research virtual simulation laboratory solution: game experience, experiential sports, film and TV entertainment, and multi-participant entertainment projects.

Virtual shopping and consumer-behavior research laboratory solution

Safety ergonomics and unsafe-behavior virtual simulation laboratory solution

Driving-behavior virtual simulation laboratory solution

Human factors and work-study virtual simulation laboratory solution

Its users span many application areas, including education and psychology, training, architectural design, military and aerospace, medicine, entertainment, and graphics modeling. The product is especially competitive in cognition-related research, with more than five hundred users at universities and research institutions across Europe, the United States, and China.

1) Research Center for Virtual Environments and Behavior, University of California, Santa Barbara

The laboratory focuses on cognition-related scientific research, including social psychology, vision, and spatial cognition, and has published extensively in leading international journals (see the publication list).

2) Psychology and Computer Science Laboratory, Miami University

Research area: spatial cognition

Human Spatial Cognition

In his research, Professor David Waller investigates how people learn and mentally represent spatial information about their environment. Wearing a head-mounted display and carrying a laptop-based dual-pipe image generator in a backpack, users can wirelessly walk through extremely large computer-generated virtual environments.

Research Project Examples

Specificity of Spatial Memories: When people learn the locations of objects in a scene, what information gets represented in memory? For example, do people only remember what they saw, or do they commit more abstract information to memory? In two projects, we address these questions by examining how well people recognize perspectives of a scene that are similar but not identical to the views they have learned. In a third project, we examine the reference frames used to code spatial information in memory. In a fourth project, we investigate whether the biases people show in their memory for pictures also occur when they remember three-dimensional scenes.

Nonvisual Egocentric Spatial Updating: When we walk through the environment, we realize that the objects we pass do not cease to exist just because they are out of sight (e.g., behind us). We stay oriented in this way because we spatially update, i.e., keep track of changes in our position and orientation relative to the environment.

Website: //www.users.muohio.edu/wallerda/spacelab/spacelabproject.html

 

3) Department of Psychology, University of Waterloo, Canada

Equipment: WorldViz Vizard 3D software toolkit, WorldViz PPT H8 optical-inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display, Arrington eye tracker

Research area: behavioral science

Professor Colin Ellard on his research: I am interested in how the organization and appearance of natural and built spaces affect movement, wayfinding, emotion, and physiology. My approach to these questions is strongly multidisciplinary and is informed by collaborations with architects, artists, planners, and health professionals. Current studies include investigations of the psychology of residential design, wayfinding at the urban scale, restorative effects of exposure to natural settings, and comparative studies of defensive responses. My research methods include both field investigations and studies of human behavior in immersive virtual environments.

Websites: //virtualpsych.uwaterloo.ca/research.htm, //www.colinellard.com/

 

Selected publications: Colin Ellard (2009). Where am I? Why we can find our way to the Moon but get lost in the mall. Toronto: Harper Collins Canada.

Journal Articles: Colin Ellard and Lori Wagar (2008). Plasticity of the association between visual space and action space in a blind-walking task. Perception, 37(7), 1044-1053.

Colin Ellard and Meghan Eller (2009). Spatial cognition in the gerbil: Computing optimal escape routes from visual threats. Animal Cognition, 12(2), 333-345.

Posters: Kevin Barton and Colin Ellard (2009). Finding your way: The influence of global spatial intelligibility and field-of-view on a wayfinding task. Poster session presented at the 9th annual meeting of the Vision Sciences Society, Naples, FL.

Brian Garrison and Colin Ellard (2009). The connection effect in the disconnect between peripersonal and extrapersonal space. Poster session presented at the 9th annual meeting of the Vision Sciences Society, Naples, FL.

 

4) Virtual Human Interaction Lab, Stanford University

Equipment: WorldViz Vizard 3D software toolkit, WorldViz PPT X8 optical-inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display, Complete Characters avatar package

The mission of the Virtual Human Interaction Lab is to understand the dynamics and implications of interactions among people in immersive virtual reality simulations (VR), and other forms of human digital representations in media, communication systems, and games. Researchers in the lab are most concerned with understanding the social interaction that occurs within the confines of VR, and the majority of our work is centered on using empirical, behavioral science methodologies to explore people as they interact in these digital worlds. However, oftentimes it is necessary to develop new gesture tracking systems, three-dimensional modeling techniques, or agent-behavior algorithms in order to answer these basic social questions. Consequently, we also engage in research geared towards developing new ways to produce these VR simulations.

Our research programs tend to fall under one of three larger questions:

      1. What new social issues arise from the use of immersive VR communication systems?

      2. How can VR be used as a basic research tool to study the nuances of face-to-face interaction?

      3. How can VR be applied to improve everyday life, such as legal practices and communication systems?

 

Website: //vhil.stanford.edu/

 

5) Neuroscience laboratory, University of California, San Diego

Equipment: WorldViz Vizard 3D software toolkit, WorldViz PPT X8 optical-inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display

The long-range objective of the laboratory is to better understand the neural bases of human sensorimotor control and learning. Our approach is to analyze normal motor control and learning processes, and the nature of the breakdown in those processes in patients with selective failure of specific sensory or motor systems of the brain. Toward this end, we have developed novel methods of imaging and graphic analysis of spatiotemporal patterns inherent in digital records of movement trajectories. We monitor movements of the limbs, body, head, and eyes, both in real environments and in 3D multimodal, immersive virtual environments, and recently have added synchronous recording of high-definition EEG. One domain of our studies is Parkinson's disease. Our studies have been dissecting out those elements of sensorimotor processing which may be most impaired in Parkinsonism, and those elements that may most crucially depend upon basal ganglia function and cannot be compensated for by other brain systems. Since skilled movement and learning may be considered opposite sides of the same coin, we also are investigating learning in Parkinson’s disease: how Parkinson’s patients learn to adapt their movements in altered sensorimotor environments; how their eye-hand coordination changes over the course of learning sequences; and how their neural dynamics are altered when learning to make decisions based on reward. Finally, we are examining the ability of drug versus deep brain stimulation therapies to ameliorate deficits in these functions.

Website: //inc2.ucsd.edu/poizner/index.html

Publications: //inc2.ucsd.edu/poizner/publications.html

Solution Highlights

1. Core virtual reality engine, compatible with multiple 3D applications

The built-in core virtual reality software engine seamlessly supports a wide range of 3D applications, so design results can be quickly imported for display and interaction.

2. Multi-channel projection for full immersion

Patented virtual reality display technology achieves seamless image stitching and edge blending, delivering a truly immersive 3D experience.

3. In-house developed, VR-based quantitative human-machine-environment evaluation provides objective data for research

The in-house ErgoLAB human-machine-environment synchronization platform, through its VR synchronization module, collects multi-source data in real time within the immersive 3D virtual environment and performs quantitative evaluation; the objective statistical results provide data support for scientific research.

4. Free-walking virtual reality enables human-behavior research under fully natural conditions, yielding more realistic data for quantitative analysis.

The entire laboratory space serves as the experimental area for the walking virtual reality system: participants can walk freely without restriction, reproducing fully real-world behavior, so the collected data are more realistic.
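Free walking works because the renderer re-expresses the world in the walker's tracked head frame every frame. A minimal 2D ground-plane sketch of that transform, assuming yaw is measured counterclockwise from +x (the function name and conventions are illustrative, not the tracking system's actual API):

```python
import math

def view_transform(point, head_pos, head_yaw_deg):
    """Express a world-space ground-plane point (x, z) in the walker's
    head frame: translate by the tracked head position, then rotate by
    the negative of the tracked yaw so +x points along the walking
    direction."""
    dx = point[0] - head_pos[0]
    dz = point[1] - head_pos[1]
    a = math.radians(-head_yaw_deg)
    return (dx * math.cos(a) - dz * math.sin(a),
            dx * math.sin(a) + dz * math.cos(a))

# A point one meter ahead of the walker maps to (1, 0) in the head
# frame regardless of where the walker stands or which way they face.
print(view_transform((1.0, 0.0), (0.0, 0.0), 0.0))
print(view_transform((2.0, 3.0), (2.0, 2.0), 90.0))
```

A full pipeline would use a 4x4 matrix with pitch and roll as well, but the per-frame translate-then-rotate structure is the same.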

ErgoHMI cockpit human factors virtual reality system

Virtual reality technology offers an opportunity to develop a transformative experimental method with both good ecological validity and good internal validity; traditional psychology experiments often sacrifice ecological validity to achieve high internal validity. Moreover, virtual reality allows psychological experiments to be conducted under natural conditions, enabling more effective research on human visual perception, movement, and cognition.

