This repository was archived by the owner on Jun 9, 2021. It is now read-only.
### INTRODUCTION
This pre-release delivers hardware-accelerated TensorFlow and TensorFlow Addons for macOS 11.0+. Native hardware acceleration is supported on M1 Macs and Intel-based Macs through Apple’s [ML Compute](https://developer.apple.com/documentation/mlcompute) framework.
### CURRENT RELEASE
- 0.1-alpha3
### SUPPORTED VERSIONS
### REQUIREMENTS
- macOS 11.0+
- Python 3.8 (on M1 Macs, this must be the Python 3.8 from the [Xcode Command Line Tools](https://developer.apple.com/download/more/?=command%20line%20tools)).
### INSTALLATION
An archive containing Python packages and an installation script can be downloaded from the [releases](https://github.com/apple/tensorflow_macos/releases).
#### Details
- To quickly try this out, copy and paste the installation command from the [releases](https://github.com/apple/tensorflow_macos/releases) page into Terminal.
This will verify your system, ask you for confirmation, then create a [virtual environment](https://docs.python.org/3.8/tutorial/venv.html) with TensorFlow for macOS installed.
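If you prefer to set up the environment by hand rather than via the script, a minimal sketch using Python's standard `venv` module looks like the following (the directory name is illustrative, not the one the installation script uses):

```shell
# Create and activate a Python 3.8 virtual environment by hand.
# The directory name below is illustrative.
python3 -m venv ./tensorflow_macos_venv
. ./tensorflow_macos_venv/bin/activate
python --version
```

After activation, `pip` and `python` resolve to the environment's own copies, which is what the installation script relies on when it installs the accelerated packages.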
- Alternatively, download the archive file from the [releases](https://github.com/apple/tensorflow_macos/releases). The archive contains an installation script, accelerated versions of TensorFlow, TensorFlow Addons, and needed dependencies.
This pre-release version supports installation and testing using the Python from Xcode Command Line Tools. See [#153](https://github.com/apple/tensorflow_macos/issues/153) for more information on installation in a Conda environment.
#### Notes
For M1 Macs, the following packages are currently unavailable:
- SciPy and dependent packages
- Server/Client TensorBoard packages
When installing pip packages in a virtual environment, you may need to pass `--target` so that the package is installed into the environment's `site-packages` directory.
Please submit feature requests or report issues via [GitHub Issues](https://github.com/apple/tensorflow_macos/issues).
It is not necessary to make any changes to your existing TensorFlow scripts to use ML Compute as a backend for TensorFlow and TensorFlow Addons.
There is an optional `mlcompute.set_mlc_device(device_name='any')` API for ML Compute device selection. The default value for `device_name` is `'any'`, which means ML Compute will select the best available device on your system, including multiple GPUs on multi-GPU configurations. Other available options are `'cpu'` and `'gpu'`. Please note that in eager mode, ML Compute will use the CPU. For example, to choose the CPU device, you may do the following:
```
# Import mlcompute module to use the optional set_mlc_device API for device selection with ML Compute.
from tensorflow.python.compiler.mlcompute import mlcompute
# Select CPU device.
mlcompute.set_mlc_device(device_name='cpu') # Available options are 'cpu', 'gpu', and 'any'.
```
#### Unsupported TensorFlow Features
The following TensorFlow features are currently not supported in this fork:
Unlike graph mode, logging in eager mode is controlled by `TF_CPP_MIN_VLOG_LEVEL`.
- Larger models being trained on the GPU may use more memory than is available, resulting in paging. If this happens, try decreasing the batch size or the number of layers.
- TensorFlow is multi-threaded, which means that different TensorFlow operations, such as `MLCSubgraphOp`, can execute concurrently. As a result, there may be overlapping logging information. To avoid this during the debugging process, set TensorFlow to execute operators sequentially by setting the number of threads to 1 (see [`tf.config.threading.set_inter_op_parallelism_threads`](https://www.tensorflow.org/api_docs/python/tf/config/threading/set_inter_op_parallelism_threads)).
##### Additional tips for debugging in eager mode:
- To find information about a specific tensor in the log, search for its buffer pointer in the log. If the tensor is defined by an operation that ML Compute does not support, you will need to cast it to `size_t` and search for it in log entries with the pattern `MemoryLogTensorAllocation ... true ptr: <(size_t)ptr>`. You may also need to modify `OpKernelContext::input()` to print out the input pointer so that you can see the entire use-def chain in the log.
- In eager mode, you may disable the conversion of any operation to ML Compute by using `TF_DISABLE_MLC_EAGER=";Op1;Op2;..."`. The gradient op may also need to be disabled by modifying the file `$PYTHONHOME/site-packages/tensorflow/python/ops/_grad.py` (this avoids TensorFlow recompilation).
- To initialize allocated memory with a specific value, use `TF_MLC_ALLOCATOR_INIT_VALUE=<init-value>`.
- To disable ML Compute acceleration (e.g. for debugging or results verification), set the environment variable `TF_DISABLE_MLC=1`.
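As a sketch of how these debugging variables fit together (the values and the op name are examples, and the variables must be set before TensorFlow is imported to take effect):

```python
import os

# Debugging-related environment variables must be set before
# TensorFlow is imported, or they will have no effect.
os.environ["TF_DISABLE_MLC"] = "1"                # turn off ML Compute acceleration
os.environ["TF_MLC_ALLOCATOR_INIT_VALUE"] = "0"   # initialize allocated memory to 0
os.environ["TF_DISABLE_MLC_EAGER"] = ";Conv2D"    # example: skip one op in eager mode

# import tensorflow as tf  # import only after setting the variables above
```

Setting the variables from Python is equivalent to exporting them in the shell before launching the script; the important point is that they are in the environment before the `tensorflow` import runs.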