5️⃣ Depth Estimation
```python
from transformers import pipeline

# Load a monocular depth-estimation pipeline backed by Intel's DPT-Large model
estimator = pipeline(
    task="depth-estimation",
    model="Intel/dpt-large"
)

# Run inference on a remote image (a COCO val2017 sample)
result = estimator(images="http://images.cocodataset.org/val2017/000000039769.jpg")
result
```
```
{'predicted_depth': tensor([[[ 6.3199,  6.3629,  6.4148,  ..., 10.4104, 10.5109, 10.3847],
          [ 6.3850,  6.3615,  6.4166,  ..., 10.4540, 10.4384, 10.4554],
          [ 6.3519,  6.3176,  6.3575,  ..., 10.4247, 10.4618, 10.4257],
          ...,
          [22.3772, 22.4624, 22.4227,  ..., 22.5207, 22.5593, 22.5293],
          [22.5073, 22.5148, 22.5115,  ..., 22.6604, 22.6345, 22.5871],
          [22.5177, 22.5275, 22.5218,  ..., 22.6282, 22.6216, 22.6108]]]),
 'depth': <PIL.Image.Image image mode=L size=640x480>}
```
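The output dictionary holds two views of the same prediction: `predicted_depth`, the raw model tensor, and `depth`, a ready-made grayscale PIL image you can save directly with `result["depth"].save("depth.png")`. If you want to build the image yourself from the raw tensor, a minimal sketch (the helper name and file name are assumptions, not part of the pipeline API) looks like this:

```python
import numpy as np
from PIL import Image

def depth_to_image(depth_array):
    """Normalize a raw depth array to [0, 255] and return a grayscale PIL image."""
    depth = np.asarray(depth_array, dtype=np.float32)
    # Scale values to [0, 1] so near/far map to dark/bright consistently
    depth = (depth - depth.min()) / (depth.max() - depth.min())
    return Image.fromarray((depth * 255).astype(np.uint8), mode="L")

# With the pipeline output above, you would call something like:
# depth_to_image(result["predicted_depth"].squeeze().numpy()).save("depth.png")
```

Note that the raw tensor is at the model's working resolution; the pipeline's `depth` image is already resized to the input image's 640x480.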