Layout Control
The Architecture
When working with the Eyeson API, the layout and layers endpoints are commonly used together to control both the arrangement of video sources and the placement of additional visual elements within the composed stream.
The layout endpoint defines how video feeds are positioned and scaled within the composition grid, while the layers endpoint allows non-video assets, such as images, maps, or status indicators, to be rendered on top of or behind the video layout.
Using both endpoints together allows precise control of video and graphical elements, enabling scenarios such as tactical maps alongside camera feeds, telemetry overlays, or branded broadcast layouts. The resulting composition is pixel-accurate and identical for all participants.

Centralized Stream Composition
Eyeson One View is built around a Multipoint Control Unit (MCU) extended with additional features. It processes and merges all incoming streams, including video, audio, metadata, and external video sources, into a single unified video stream.
The resulting stream dynamically adapts its quality to each participant's network conditions. Because the stream originates from a centralized composition, every participant receives identical visual content. This enables full situational awareness across distributed teams.
Eyeson is well suited to public safety and defense applications where visual information is critical for decision-making. Under the 3GPP Mission Critical Services (MCX) standards, the system only needs to initiate the video clients; the Eyeson Gateway then connects to them and delivers a managed video stream.
Additional video sources, such as drones, IP cameras, or bodycams, can also be added to Eyeson sessions. These sources may provide either live feeds or playback content that becomes part of the composited stream.
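For illustration, injecting such a source as a playback might look like the sketch below. The playbacks endpoint and its `playback[url]` / `playback[play_id]` parameter names are assumptions to be verified against the current Eyeson API reference; the URL and identifier are placeholders.

```javascript
// Hypothetical sketch: inject an external video source (e.g. a recorded
// drone feed) as a playback. Endpoint and parameter names are assumptions
// to check against the Eyeson API reference.
const playbackBody = (sourceUrl, playId) => {
  const body = new URLSearchParams();
  body.set('playback[url]', sourceUrl);   // public URL of the video source
  body.set('playback[play_id]', playId);  // identifier to reference it later
  return body;
};

const startPlayback = async (accessKey, sourceUrl, playId) => {
  const response = await fetch(
    `https://api.eyeson.team/rooms/${accessKey}/playbacks`,
    { method: 'POST', body: playbackBody(sourceUrl, playId) }
  );
  if (!response.ok) throw new Error('Failed to start playback');
};
```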
Layout Endpoint
When multiple video sources are connected to a session, Eyeson automatically arranges and positions the video streams according to predefined layout patterns. This ensures that incoming feeds are organized and displayed without any manual configuration.
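Returning to this automatic arrangement is a single POST of `layout=auto` to the layout endpoint. A minimal JavaScript sketch (the request body mirrors the curl examples in this guide; the access key is a placeholder):

```javascript
// Build the form body for resetting the layout to automatic arrangement.
const autoLayoutBody = () => {
  const body = new URLSearchParams();
  body.set('layout', 'auto');
  return body;
};

// Post it to the layout endpoint of a room.
const setAutoLayout = async (accessKey) => {
  const response = await fetch(
    `https://api.eyeson.team/rooms/${accessKey}/layout`,
    { method: 'POST', body: autoLayoutBody() }
  );
  if (!response.ok) throw new Error('Failed to set layout');
};
```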
In scenarios where one video source needs to be highlighted, for example a drone feed or a command unit camera, the layout can be defined manually via the layout endpoint. A typical use case is a Virtual Situation Room for first responders, in which multidisciplinary units share a unified situational picture. Another is visualizing a drone fleet with live feeds or playback video from a database.

curl -X POST \
  -d "layout=custom" \
  -d "name=custom-map" \
  -d "map=[ \
    [20, 20, 940, 680, \"cover\"], \
    [900, 180, 360, 240, \"auto\"], \
    [900, 440, 360, 240, \"auto\"] \
  ]" \
  -d "users[]=drone" \
  -d "users[]=pilot" \
  -d "users[]=coordinator" \
  "https://api.eyeson.team/rooms/$ACCESS_KEY/layout"
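The same request can be issued from JavaScript. This sketch builds the form body shown in the curl example above (field names and map values are taken directly from that example):

```javascript
// Build the custom layout body: one large "cover" region and two smaller
// "auto" regions, assigned to named users in map order.
const customLayoutBody = () => {
  const body = new URLSearchParams();
  body.set('layout', 'custom');
  body.set('name', 'custom-map');
  body.set('map', JSON.stringify([
    [20, 20, 940, 680, 'cover'],
    [900, 180, 360, 240, 'auto'],
    [900, 440, 360, 240, 'auto'],
  ]));
  for (const user of ['drone', 'pilot', 'coordinator']) {
    body.append('users[]', user);
  }
  return body;
};

const setCustomLayout = async (accessKey) => {
  const response = await fetch(
    `https://api.eyeson.team/rooms/${accessKey}/layout`,
    { method: 'POST', body: customLayoutBody() }
  );
  if (!response.ok) throw new Error('Failed to set layout');
};
```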
Layers Endpoint
Operational environments often require contextual information that does not originate from video sources alone: operational data, telemetry, or external information frequently needs to be represented alongside the video feed. Such data can be converted into graphical elements and integrated into the video stream.
The layers endpoint enables visual content to be rendered either directly on top of the video stream as an overlay or placed behind it as a background within the composed video output. Typical use cases include tactical maps, live mission data, status indicators, situational awareness graphics, and branding elements.

curl -X POST \
  -d "url=https://docs.eyeson.com/img/examples/overlay_example.png" \
  -d "z-index=1" \
  "https://api.eyeson.team/rooms/$ACCESS_KEY/layers"
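Layers can also be placed behind the video instead of on top of it via the z-index parameter. The sketch below uses a negative z-index for a background layer; this convention (negative for background, positive for foreground, as in the overlay example above) is an assumption to verify against the API reference.

```javascript
// Sketch: place an image behind the video layout as a background layer.
// A z-index of -1 for "background" is an assumption to check against the
// Eyeson API docs; the curl example above uses 1 for a foreground overlay.
const backgroundLayerBody = (imageUrl) => {
  const body = new URLSearchParams();
  body.set('url', imageUrl);
  body.set('z-index', '-1');
  return body;
};

const setBackground = async (accessKey, imageUrl) => {
  const response = await fetch(
    `https://api.eyeson.team/rooms/${accessKey}/layers`,
    { method: 'POST', body: backgroundLayerBody(imageUrl) }
  );
  if (!response.ok) throw new Error('Failed to set background layer');
};
```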

The following browser example renders a countdown on a canvas element and uploads each frame as an overlay layer:

const access = "YOUR_ROOM_ACCESS_KEY";

const sendImage = async (access, canvas) => {
  try {
    const url = `https://api.eyeson.team/rooms/${access}/layers`;
    // Convert the canvas to a PNG blob and upload it as a foreground layer.
    const blob = await new Promise(resolve => canvas.toBlob(resolve));
    const formData = new FormData();
    formData.set('file', blob, 'overlay.png');
    formData.set('z-index', '1');
    const response = await fetch(url, { method: 'POST', body: formData });
    if (!response.ok) {
      throw new Error('Failed to send image');
    }
  } catch (error) {
    alert(error.message);
  }
};

// Draw the current countdown number centered on the canvas and upload it.
const drawCountdown = async (canvas, number) => {
  const ctx = canvas.getContext('2d');
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.font = "200px Arial";
  ctx.textAlign = "center";
  ctx.textBaseline = "middle";
  ctx.fillStyle = "white";
  ctx.fillText(number.toString(), canvas.width / 2, canvas.height / 2);
  await sendImage(access, canvas);
};

const startCountdown = async (value = 5) => {
  if (isNaN(value) || value < 5) value = 5; // enforce a minimum of 5 seconds
  const canvas = document.createElement("canvas");
  canvas.width = 1280; // match the composed stream resolution
  canvas.height = 720;
  for (let i = value; i >= 0; i--) {
    await drawCountdown(canvas, i);
    await new Promise(resolve => setTimeout(resolve, 1000));
  }
  // setLayout and deleteImage: helpers that apply the final layout
  // and remove the countdown layer once the countdown finishes.
  await setLayout(access);
  await deleteImage(access);
};

startCountdown(5);
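The countdown calls setLayout and deleteImage, which are not shown above. A sketch of what they could look like follows; the DELETE route addressing a layer by its z-index path segment is an assumption to verify against the API reference.

```javascript
// Sketch of the two helpers the countdown example relies on. The DELETE
// route with the z-index as the final path segment is an assumption;
// check the Eyeson API docs before use.
const layerUrl = (accessKey, zIndex) =>
  `https://api.eyeson.team/rooms/${accessKey}/layers/${zIndex}`;

// Remove the foreground layer (z-index 1) used for the countdown.
const deleteImage = async (accessKey) => {
  const response = await fetch(layerUrl(accessKey, 1), { method: 'DELETE' });
  if (!response.ok) throw new Error('Failed to delete layer');
};

// Reset the room to the automatic layout after the countdown.
const setLayout = async (accessKey) => {
  const body = new URLSearchParams({ layout: 'auto' });
  const response = await fetch(
    `https://api.eyeson.team/rooms/${accessKey}/layout`,
    { method: 'POST', body }
  );
  if (!response.ok) throw new Error('Failed to set layout');
};
```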

