Wauplin (HF staff) committed
Commit 8cd0f50
1 parent: add378b

Set `library_name` to `tf-keras`.


The model `keras-io/monocular-depth-estimation` appears to be compatible only with Keras 2, not Keras 3. To distinguish the two, models compatible with legacy Keras 2.x should be tagged `tf-keras`, while models compatible with Keras 3.x are tagged `keras`.

This PR updates the model card, replacing the now-outdated explicit `library_name: keras` metadata with `library_name: tf-keras`. Updating this metadata will make the model easier to discover and to load with the right library.
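In the card's YAML front matter, the change amounts to a one-line edit. Based on the header shown in the diff below, it would look like this (assuming the existing `image-segmentation` tag is kept):

```diff
 ---
 tags:
 - image-segmentation
-library_name: keras
+library_name: tf-keras
 ---
```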

For more information about the `keras` and `tf-keras` library names, check out this pull request: https://github.com/huggingface/huggingface.js/pull/774.
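As an aside, this kind of metadata edit can also be applied programmatically. Below is a minimal sketch using the `metadata_update` helper from `huggingface_hub` (an illustration, not necessarily how this PR was produced; it requires a token with write access, or `create_pr=True` to propose the change as a pull request instead):

```python
from huggingface_hub import metadata_update

# Update the card metadata on the Hub, replacing the existing
# `library_name: keras` entry with `tf-keras`.
metadata_update(
    repo_id="keras-io/monocular-depth-estimation",
    metadata={"library_name": "tf-keras"},
    overwrite=True,  # allow replacing an existing value
    create_pr=True,  # open a PR rather than pushing directly
)
```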

Files changed (1)
  1. README.md +3 -58
README.md CHANGED
@@ -1,58 +1,3 @@
- ---
- tags:
- - image-segmentation
- library_name: keras
- ---
- ## Model description
- The original idea from Keras examples [Monocular depth estimation](https://keras.io/examples/vision/depth_estimation/) of author [Victor Basu](https://www.linkedin.com/in/victor-basu-520958147/)
-
- Full credits go to [Vu Minh Chien](https://www.linkedin.com/in/vumichien/)
-
- Depth estimation is a crucial step towards inferring scene geometry from 2D images. The goal in monocular depth estimation is to predict the depth value of each pixel or infer depth information, given only a single RGB image as input.
-
- ## Dataset
- [NYU Depth Dataset V2](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html) is comprised of video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft Kinect.
-
- ## Training procedure
-
- ### Training hyperparameters
- **Model architecture**:
- - UNet with a pretrained DenseNet 201 backbone.
-
- The following hyperparameters were used during training:
- - learning_rate: 1e-04
- - train_batch_size: 16
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: ReduceLROnPlateau
- - num_epochs: 10
-
- ### Training results
-
- | Epoch | Training loss | Validation Loss | Learning rate |
- |:------:|:-------------:|:---------------:|:-------------:|
- | 1 | 0.1333 | 0.1315 | 1e-04 |
- | 2 | 0.0948 | 0.1232 | 1e-04 |
- | 3 | 0.0834 | 0.1220 | 1e-04 |
- | 4 | 0.0775 | 0.1213 | 1e-04 |
- | 5 | 0.0736 | 0.1196 | 1e-04 |
- | 6 | 0.0707 | 0.1205 | 1e-04 |
- | 7 | 0.0687 | 0.1190 | 1e-04 |
- | 8 | 0.0667 | 0.1177 | 1e-04 |
- | 9 | 0.0654 | 0.1177 | 1e-04 |
- | 10 | 0.0635 | 0.1182 | 9e-05 |
-
- ### View Model Demo
-
- ![Model Demo](./demo.png)
-
- <details>
-
- <summary> View Model Plot </summary>
-
- ![Model Image](./model.png)
-
- </details>
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:71ab91e079cd80b4bc2d7ad05667d9b9f634bb1fd260426553fd8deabbde390f
+ size 2080
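Once the model is tagged `tf-keras`, it can still be loaded through the Hub's Keras integration. A sketch, assuming a TensorFlow/Keras 2.x environment with `huggingface_hub` installed:

```python
from huggingface_hub import from_pretrained_keras

# Downloads the saved model from the Hub and rebuilds it with legacy Keras 2.
model = from_pretrained_keras("keras-io/monocular-depth-estimation")
model.summary()
```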