This repository was archived by the owner on Jun 9, 2021. It is now read-only.

TensorFlow:
- Many bug fixes.
Installation:
- Resolved some installation issues.
- Resolved gRPC issue.
- Added a check for the correct CPU subtype in the Python executable.
README:
- Clarified Python version requirements.
- Added logging and debugging information.

## Mac-optimized TensorFlow and TensorFlow Addons

### INTRODUCTION

This pre-release delivers hardware-accelerated TensorFlow and TensorFlow Addons for macOS 11.0+. Native hardware acceleration is supported on Macs with M1 and Intel-based Macs through Apple’s [ML Compute](https://developer.apple.com/documentation/mlcompute) framework.

### CURRENT RELEASE

- 0.1-alpha1

### SUPPORTED VERSIONS

### REQUIREMENTS

- macOS 11.0+
- Python 3.8, available from the [Xcode Command Line Tools](https://developer.apple.com/download/more/?=command%20line%20tools).

### INSTALLATION

An archive containing Python packages and an installation script can be downloaded from the [releases](https://github.com/apple/tensorflow_macos/releases).

#### Details

- To quickly try this out, copy and paste the installation command provided with the [release](https://github.com/apple/tensorflow_macos/releases) into Terminal. This will verify your system, ask you for confirmation, then create a [virtual environment](https://docs.python.org/3.8/tutorial/venv.html) with TensorFlow for macOS installed.
- Alternatively, download the archive file from the [releases](https://github.com/apple/tensorflow_macos/releases). The archive contains an installation script, accelerated versions of TensorFlow, TensorFlow Addons, and needed dependencies.

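Once the installer has finished, you can sanity-check the result from Terminal. The environment path below is only an example; substitute whatever location you chose during installation:

```
# Activate the virtual environment created by the installer (path is an example).
. ./tensorflow_macos_venv/bin/activate

# Confirm the Mac-optimized TensorFlow build imports and print its version.
python -c "import tensorflow as tf; print(tf.__version__)"
```
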
#### Notes

For Macs with M1, the following packages are currently unavailable:

- SciPy and dependent packages
- Server/Client TensorBoard packages

### ISSUES AND FEEDBACK

Please submit feature requests or report issues via [GitHub Issues](https://github.com/apple/tensorflow_macos/issues).

### ADDITIONAL INFORMATION

#### Device Selection (Optional)

It is not necessary to make any changes to your existing TensorFlow scripts to use ML Compute as a backend for TensorFlow and TensorFlow Addons.

There is an optional `mlcompute.set_mlc_device(device_name='any')` API for ML Compute device selection. The default value for `device_name` is `'any'`, which means ML Compute will select the best available device on your system, including multiple GPUs on multi-GPU configurations. Other available options are `'cpu'` and `'gpu'`. Please note that in eager mode, ML Compute will use the CPU. For example, to choose the CPU device, you may do the following:

```
# Import mlcompute module to use the optional set_mlc_device API for device selection with ML Compute.
from tensorflow.python.compiler.mlcompute import mlcompute

# Select CPU device.
mlcompute.set_mlc_device(device_name='cpu')  # Available options are 'cpu', 'gpu', and 'any'.
```

#### Logs and Debugging

##### Graph mode

Logging provides more information about what happens when a TensorFlow model is optimized by ML Compute. Turn logging on by setting the environment variable `TF_MLC_LOGGING=1` when executing the model script. The following is the list of information that is logged in graph mode (a sample invocation follows the list):

- Device used by ML Compute.
- Original TensorFlow graph without ML Compute.
- TensorFlow graph after TensorFlow operations have been replaced with ML Compute.
  - Look for MLCSubgraphOp nodes in this graph. Each of these nodes replaces a TensorFlow subgraph from the original graph, encapsulating all the operations in the subgraph. This, for example, can be used to determine which operations are being optimized by ML Compute.
- Number of subgraphs using ML Compute and how many operations are included in each of these subgraphs.
  - Having larger subgraphs that encapsulate big portions of the original graph usually results in better performance from ML Compute. Note that for training, there will usually be at least two MLCSubgraphOp nodes (representing forward and backward/gradient subgraphs).
- TensorFlow subgraphs that correspond to each of the ML Compute graphs.

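For example, assuming your model lives in a script named `train.py` (a placeholder name), graph-mode logging can be enabled for a single run from Terminal:

```
# Enable ML Compute logging for this run only; train.py is a placeholder script name.
TF_MLC_LOGGING=1 python train.py
```
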
##### Eager mode

Unlike graph mode, logging in eager mode is controlled by `TF_CPP_MIN_VLOG_LEVEL`. The following is the list of information that is logged in eager mode (a sample invocation follows the list):

- The buffer pointer and shape of input/output tensors.
- The key for associating the tensor’s buffer to the built `MLCTraining` or `MLCInference` graph. This key is used to retrieve the graph and run a backward pass or an optimizer update.
- The weight tensor format.
- Caching statistics, such as insertions and deletions.

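Eager-mode logging can be enabled the same way. The level shown here is an arbitrary example (higher values log more), and `train.py` is again a placeholder:

```
# Raise TensorFlow's verbose logging level for this run; 2 is an arbitrary example value.
TF_CPP_MIN_VLOG_LEVEL=2 python train.py
```
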
##### Tips for debugging

- Larger models being trained on the GPU may use more memory than is available, resulting in paging. If this happens, try decreasing the batch size or the number of layers.
- TensorFlow is multi-threaded, which means that different TensorFlow operations, such as `MLCSubgraphOp`, can execute concurrently. As a result, there may be overlapping logging information. To avoid this during the debugging process, set TensorFlow to execute operators sequentially by setting the number of threads to 1 (see [`tf.config.threading.set_inter_op_parallelism_threads`](https://www.tensorflow.org/api_docs/python/tf/config/threading/set_inter_op_parallelism_threads)).

##### Additional tips for debugging in eager mode

- To find information about a specific tensor in the log, search for its buffer pointer in the log. If the tensor is defined by an operation that ML Compute does not support, you will need to cast it to `size_t` and search for it in log entries with the pattern `MemoryLogTensorAllocation ... true ptr: <(size_t)ptr>`. You may also need to modify `OpKernelContext::input()` to print out the input pointer so that you can see the entire use-def chain in the log.
- You may disable the conversion of any eager operation to ML Compute by using `TF_DISABLE_MLC_EAGER=";Op1;Op2;..."`. The gradient op may also need to be disabled by modifying the file `$PYTHONHOME/site-packages/tensorflow/python/ops/_grad.py` (this avoids TensorFlow recompilation).
- To initialize allocated memory with a specific value, use `TF_MLC_ALLOCATOR_INIT_VALUE=<init-value>`.

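Putting the last two tips together, a hypothetical debugging run might disable eager ML Compute conversion for a couple of ops and zero-initialize freshly allocated buffers. The op names and the init value below are placeholders, not recommendations:

```
# Op names and the init value are placeholders for illustration only.
TF_DISABLE_MLC_EAGER=";Conv2D;MatMul" TF_MLC_ALLOCATOR_INIT_VALUE=0 python train.py
```
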
The installation script was also updated. It now rejects Python executables with the arm64e CPU subtype before continuing:

```
    error_exit "Python executable has CPU subtype arm64e; only arm64 CPU subtype is currently supported. Please use the Python version bundled in the Xcode Command Line Tools."
fi

# Print out confirmation of actions, run with it
```

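If the installer exits with that error, a quick diagnostic (not part of the installer itself) is to list the architecture slices in the Python binary on your `PATH`:

```
# Show which CPU architectures the Python executable contains (expects arm64, not arm64e).
lipo -archs "$(which python3)"
file "$(which python3)"
```
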
The script also pins pip to 20.2.4 when installing the base packages (previously it simply upgraded pip to the latest version):

```
# Upgrade pip and base packages
echo ">> Installing and upgrading base packages."
"$python_bin" -m pip install --force pip==20.2.4 wheel setuptools cached-property six
```