Notice

[STRADVISION] Sr. AI Engineer Job Opening

Author: 이상원

Date: 2025/12/08 (Mon) 11:37 AM


[STRADVISION] Sr. AI Engineer (Generative Vision / Realistic-Novel-View Synthesis)


[About STRADVISION]
We Empower Everything To Perceive Intelligently
Guided by the mission above, STRADVISION is building a better life for everyone with AI-based camera perception technology. More than 300 colleagues across eight offices worldwide are developing Vision Perception AI so that the software we build perceives everything accurately and without gaps, conveying the true meaning of the world.
We are looking for colleagues to join STRADVISION's meaningful challenge: VISION AI technology for a better world.


[Our Technology]
  • - In 2019, STRADVISION became the first deep-learning-based startup in the world to obtain Europe's ASPICE CL2 certification, and in 2020 it received the AVT ACES Autonomous Vehicle Innovation Award.
  • - At the AutoSens Awards, a global stage for autonomous-driving companies and experts, STRADVISION was the Gold Award Winner in object detection for two consecutive years (2021, 2022).
  • - The Series C funding round closed in August 2022 at 107.6 billion KRW, with Aptiv (US) and ZF Group (Germany) participating as strategic investors — global recognition of STRADVISION's technology.
  • - STRADVISION holds 167 US patents related to Deep Neural Networks and continues active R&D to secure differentiated technology.
  • - In the first half of 2025, vehicles equipped with SVNet surpassed 4 million cumulative units worldwide, sustaining growth despite the economic slowdown and intensifying industry competition.
  • - In 2025, STRADVISION was ranked 10th in Korea's 'AI competitiveness', following Samsung Electronics, NAVER, LG, and others.

[Mission of the Role]
Build and productionize novel-view synthesis (side/rear views) and a semantically consistent BEV/Occupancy-based "Vector-Space/World Model" on top of our MV-Gen2 multi-camera vision stack. You will use front-camera video, LiDAR, and CAN/IMU/GNSS logs to achieve high-fidelity, real-time scene reconstruction, prediction, and synthesis, and ship production-grade models/pipelines that integrate with Path Planning/Control.
This role is a unique opportunity to work on high-impact, cutting-edge research that directly contributes to the development of next-generation autonomous driving systems.

[Key Responsibilities]
The selected candidate will be responsible for designing, developing, and optimizing deep learning models for Generative Vision / BEV & Realistic Novel-View Synthesis for ADAS/Autonomy.
  • Generative Vision / World Model R&D
  • - Design scene understanding, prediction, and synthesis using BEV/Occupancy/3D representations (generate side/rear camera views with strong texture/geometry consistency).
  • Multi-Sensor Fusion
  • - Fuse front/surround cameras + LiDAR + CAN/IMU/GNSS for a 4D dynamic scene representation, including ego-motion/pose estimation.
  • Model Architecture
  • - Use Transformer/Diffusion/NeRF-variants/Gaussian Splatting/Video Autoencoders to enforce spatio-temporal consistency under geometric constraints.
  • Training Pipeline
  • - Build large-scale training/eval pipelines for in-house driving logs and public datasets (nuScenes/Waymo/Argoverse2, etc.), including self-supervised and weakly supervised learning.
  • Real-Time Optimization
  • - Optimize for CUDA/TensorRT/ONNX and (optionally) TDA4VM/Orin; manage multithreading and latency/memory budgets.
  • Quantitative Evaluation
  • - Track PSNR/SSIM/LPIPS + geometric consistency (Depth/Flow/Epipolar), BEV/Occupancy IoU/mAP, temporal stability, and end-to-end planning impact.
  • Production Integration
  • - Integrate Perception → Planning/Simulation (replay/augmentation), automate data generation/augmentation and QA gates.
  • Research-to-Production
  • - Monitor literature, inject domain constraints, and bridge SOTA to production using ablations, KD/distillation, and pragmatic trade-offs.
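As a rough illustration of the image-quality side of the evaluation work above, here is a minimal NumPy sketch of PSNR, one of the listed metrics. The array shapes, noise model, and `psnr` helper are illustrative assumptions, not part of any actual evaluation stack; SSIM/LPIPS and the geometric-consistency checks require dedicated libraries.

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two images of equal shape."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: compare a reference frame against a "synthesized" frame
# perturbed by small uniform noise.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
test = ref + rng.uniform(-1.0, 1.0, size=ref.shape)
print(f"PSNR: {psnr(ref, test):.1f} dB")
```

In practice such per-frame scores would be aggregated per sequence and tracked alongside temporal-stability and BEV/Occupancy metrics.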


[Basic Qualifications]
  • - Master’s degree with 5+ years of relevant industry experience or Ph.D. with 1+ years (or equivalent), with a total of 5–8+ years of hands-on Deep Learning experience across computer vision, machine learning, or robotics domains.
  • - Strong programming skills: Python required, with C++/CUDA preferred; able to design systems with careful consideration of performance-memory-latency trade-offs.
  • - Strong foundation in 3D geometry and multiview perception, including camera intrinsics/extrinsics/distortion modeling, coordinate transformations, PnP and bundle adjustment, depth/optical flow estimation, and BEV/occupancy representations.
  • - Hands-on experience with Temporal/Transformer/Diffusion (at least 1 stack), including large-scale training and hyperparameter tuning.
  • - Demonstrated expertise in building large-scale video and multiview training pipelines in PyTorch, including distributed training, mixed precision, checkpointing, logging, and replay mechanisms.
  • - Experience in driving log data engineering (video-LiDAR-CAN alignment, timestamping, and sensor drift handling).
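As a small illustration of the intrinsics/extrinsics material in the qualifications above, here is a pinhole-camera projection sketch in NumPy. The calibration values `K`, `R`, `t` and the `project` helper are made up for illustration; real values come from rig calibration.

```python
import numpy as np

# Pinhole projection: world point -> pixel, via extrinsics [R|t], then intrinsics K.
K = np.array([[800.0,   0.0, 320.0],   # fx, skew, cx
              [  0.0, 800.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                 # world-to-camera rotation (identity for simplicity)
t = np.zeros(3)               # world-to-camera translation

def project(p_world: np.ndarray) -> np.ndarray:
    """Project a 3D world point to pixel coordinates (u, v)."""
    p_cam = R @ p_world + t   # world -> camera frame
    uvw = K @ p_cam           # camera frame -> homogeneous pixel coords
    return uvw[:2] / uvw[2]   # perspective divide

# A point 10 m ahead, 1 m right, 0.5 m up (y-down camera convention):
print(project(np.array([1.0, -0.5, 10.0])))  # -> [400. 200.]
```

Distortion modeling, PnP, and bundle adjustment build on exactly this projection model.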

[Recruitment Process]
  • - Application Review > Recruiter Phone Screening (if required) > Coding Test > Tech Interview(s) > Reference Check (5+ years' experience) > Offer > Onboarding
  •    - Please be aware that the recruitment process and schedule may change depending on the job and/or other circumstances.

[How to Apply]

[STRADVISION Recruiting Team inquiries: recruiting@stradvision.com]