
Commit 0b38ae2 (2 parents: 2134791 + f566f25)

Merge pull request #542 from yahoo/leewyang_sd: migrate build from travis to screwdriver

24 files changed: +308 −97 lines

.travis.yml

Lines changed: 0 additions & 35 deletions
This file was deleted.

README.md

Lines changed: 4 additions & 1 deletion

@@ -7,7 +7,10 @@ Please see LICENSE file in the project root for terms.
 > _TensorFlowOnSpark brings scalable deep learning to Apache Hadoop and Apache Spark
 clusters._
 
-[![Build Status](https://travis-ci.org/yahoo/TensorFlowOnSpark.svg?branch=master)](https://travis-ci.org/yahoo/TensorFlowOnSpark) [![PyPI version](https://badge.fury.io/py/tensorflowonspark.svg)](https://badge.fury.io/py/tensorflowonspark)
+[![Build Status](https://cd.screwdriver.cd/pipelines/6384/badge?nocache=true)](https://cd.screwdriver.cd/pipelines/6384)
+[![Package](https://img.shields.io/badge/package-pypi-blue.svg)](https://pypi.org/project/tensorflowonspark/)
+[![Downloads](https://img.shields.io/pypi/dm/tensorflowonspark.svg)](https://img.shields.io/pypi/dm/tensorflowonspark.svg)
+[![Documentation](https://img.shields.io/badge/Documentation-latest-blue.svg)](https://yahoo.github.io/TensorFlowOnSpark/)
 
 By combining salient features from the [TensorFlow](https://www.tensorflow.org) deep learning framework with [Apache Spark](http://spark.apache.org) and [Apache Hadoop](http://hadoop.apache.org), TensorFlowOnSpark enables distributed
 deep learning on a cluster of GPU and CPU servers.

docs/source/conf.py

Lines changed: 2 additions & 2 deletions

@@ -28,9 +28,9 @@
 author = 'Yahoo Inc'
 
 # The short X.Y version
-version = '2.2.1'
+version = '2.2.2'
 # The full version, including alpha/beta/rc tags
-release = '2.2.1'
+release = '2.2.2'
 
 
 # -- General configuration ---------------------------------------------------

pom.xml

Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>com.yahoo.ml</groupId>
   <artifactId>tensorflowonspark</artifactId>
-  <version>2.2.0-SNAPSHOT</version>
+  <version>2.2.2-SNAPSHOT</version>
   <packaging>jar</packaging>
   <name>tensorflowonspark</name>
   <description>Spark Scala inferencing for TensorFlowOnSpark</description>

requirements.txt

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@ h5py>=2.9.0
 numpy>=1.14.0
 packaging
 py4j==0.10.7
-pyspark==2.4.5
+pyspark==2.4.7
 scipy
 setuptools>=41.0.0
 sphinx

screwdriver.yaml

Lines changed: 56 additions & 0 deletions

@@ -0,0 +1,56 @@
+# Copyright 2017, Verizon Inc.
+# Licensed under the terms of the apache license. See the LICENSE file in the project root for terms
+
+version: 4
+shared:
+  environment:
+    PACKAGE_DIRECTORY: tensorflowonspark
+    SPARK_HOME: ${SD_ROOT_DIR}/spark
+    TOX_ARGS: '--verbose'
+    TOX_ENVLIST: py37
+  annotations:
+    screwdriver.cd/cpu: HIGH
+    screwdriver.cd/ram: HIGH
+
+jobs:
+  validate_test:
+    template: python/validate_unittest
+    requires: [~commit, ~pr]
+    steps:
+      - prevalidate_code: |
+          source scripts/install_spark.sh
+
+  validate_lint:
+    template: python/validate_lint
+    requires: [~commit, ~pr]
+
+  validate_codestyle:
+    template: python/validate_codestyle
+    requires: [~commit, ~pr]
+
+  validate_safetydb:
+    template: python/validate_safety
+    requires: [~commit, ~pr]
+
+#  validate_security:
+#    template: python/validate_security
+#    requires: [~commit, ~pr]
+
+  publish_test_pypi:
+    template: python/package_python
+    environment:
+      PUBLISH: True
+      TWINE_REPOSITORY_URL: https://test.pypi.org/legacy/
+    requires: [validate_test, validate_lint, validate_codestyle, validate_safetydb]
+    steps:
+      - update_version: |
+          echo 'using version from setup.cfg'
+
+  publish_pypi:
+    template: python/package_python
+    environment:
+      PUBLISH: True
+    requires: [publish_test_pypi]
+    steps:
+      - update_version: |
+          echo 'using version from setup.cfg'

scripts/install_spark.sh

Lines changed: 11 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,11 @@
1+
#!/bin/bash -x
2+
3+
# Install JDK8
4+
yum install -y java-1.8.0-openjdk
5+
export JAVA_HOME=/usr/lib/jvm/jre-1.8.0
6+
7+
# Install Spark
8+
export SPARK_VERSION=2.4.7
9+
curl -LO http://www-us.apache.org/dist/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz
10+
mkdir $SPARK_HOME
11+
tar -xf spark-${SPARK_VERSION}-bin-hadoop2.7.tgz -C $SPARK_HOME --strip-components=1
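
A note on the download URL above: the rotating `www-us.apache.org/dist` mirrors only carry current releases, so older Spark versions eventually resolve only on `archive.apache.org`, which retains every past release. A minimal sketch of building the long-term archive URL for a given version (the helper name is illustrative, not part of this repo):

```python
def spark_archive_url(spark_version: str, hadoop_version: str = "2.7") -> str:
    """Build the archive.apache.org URL for a Spark binary release.

    archive.apache.org keeps all past releases, unlike the
    mirror path used in install_spark.sh, which drops old ones.
    """
    archive = f"spark-{spark_version}-bin-hadoop{hadoop_version}.tgz"
    return f"https://archive.apache.org/dist/spark/spark-{spark_version}/{archive}"


print(spark_archive_url("2.4.7"))
```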

scripts/start_spark.sh

Lines changed: 12 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,12 @@
1+
#!/bin/bash -x
2+
#export SPARK_HOME=/opt/spark
3+
#export SPARK_LOCAL_IP=127.0.0.1
4+
#export PATH=$SPARK_HOME/bin:$PATH
5+
#
6+
## Start Spark Standalone Cluster
7+
#export SPARK_CLASSPATH=./lib/tensorflow-hadoop-1.0-SNAPSHOT.jar
8+
#export MASTER=spark://$(hostname):7077
9+
#export SPARK_WORKER_INSTANCES=2; export CORES_PER_WORKER=1
10+
#export TOTAL_CORES=$((${CORES_PER_WORKER}*${SPARK_WORKER_INSTANCES}))
11+
12+
${SPARK_HOME}/sbin/start-master.sh; ${SPARK_HOME}/sbin/start-slave.sh -c ${CORES_PER_WORKER} -m 1G ${MASTER}

scripts/stop_spark.sh

Lines changed: 3 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,3 @@
1+
#!/bin/bash -x
2+
3+
${SPARK_HOME}/sbin/stop-slave.sh; ${SPARK_HOME}/sbin/stop-master.sh

scripts/travis_before_install.sh

Lines changed: 0 additions & 28 deletions
This file was deleted.

sd.allow

Lines changed: 3 additions & 0 deletions

@@ -0,0 +1,3 @@
+version: 1
+push:
+  - screwdriver:6384

setup.cfg

Lines changed: 72 additions & 1 deletion

@@ -1,5 +1,76 @@
+# Copyright 2017, Verizon Inc.
+# Licensed under the terms of the apache license. See the LICENSE file in the project root for terms
 [metadata]
-description-file = README.md
+author = Lee Yang
+author_email = [email protected]
+classifiers =
+    Intended Audience :: Developers
+    Intended Audience :: Science/Research
+    License :: OSI Approved :: Apache Software License
+    Topic :: Software Development :: Libraries
+    Programming Language :: Python :: 3 :: Only
+    Programming Language :: Python :: 3.6
+    Programming Language :: Python :: 3.7
+    Programming Language :: Python :: 3.8
+description = Deep learning with TensorFlow on Apache Spark clusters
+license = Apache 2.0
+long_description = file:README.md
+long_description_content_type = text/markdown
+name = tensorflowonspark
+url = https://github.com/yahoo/TensorFlowOnSpark
+version = 2.2.2
+
+[options]
+packages =
+    tensorflowonspark
+
+# The install_requires should include abstract package dependencies
+# here (do not specify specific versions)
+install_requires =
+    setuptools>38.0
+
+# By default new packages require at minimum the current supported Python release.
+python_requires = >="3.6"
+zip_safe = True
+
+[options.extras_require]
+# This config section allows you to define optional dependencies. For the general case, the defaults
+# work fine, so these settings aren't required. However, many of the screwdriver CI Pipeline steps
+# will install the appropriate extras for that step. This makes it possible to install packages that
+# extend or enhance the functionality of the CI Pipeline step,
+# such as packages that implement plugins or themes for the step in question.
+
+# Additional packages for testing (test step)
+# test =
+
+# Additional packages needed for documentation generation (doc_build/doc_publish steps)
+# If you want to use a sphinx theme from a package, list it here.
+# doc_build =
+
+# Additional packages needed for mypy type checking
+# mypy =
+
+# Additional packages needed for pep8/pycodestyle style checking
+# pycodestyle =
+
+# Additional packages needed for pylint code analysis
+# pylint =
+
+[options.entry_points]
+# Console script entry points are used to create wrapper scripts that run a specific function;
+# the resulting wrapper is installed in the bin directory.
+# They are defined using the following format:
+#   scriptname = modulename:function
+# console_scripts =
+#     TFoS=ouroath.TFoS.cli:main
+
+[screwdrivercd.version]
+# Base the autoversion build number on the screwdriver build number.
+# This requires the CI Pipeline to have a build step that runs before
+# any packaging steps.
+version_type = sdv4_SD_BUILD
 
 [bdist_wheel]
 universal = 1
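
Because the package metadata now lives in declarative `setup.cfg` sections rather than `setup.py` keyword arguments, it can be inspected with the standard library alone. A minimal sketch, using a fragment of the `[metadata]` section added in this commit:

```python
import configparser

# Parse a setup.cfg-style [metadata] section; the fields mirror
# (a subset of) those added in this commit.
cfg = configparser.ConfigParser()
cfg.read_string("""\
[metadata]
name = tensorflowonspark
version = 2.2.2
license = Apache 2.0
""")

print(cfg["metadata"]["name"], cfg["metadata"]["version"])
```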

setup.py

Lines changed: 24 additions & 27 deletions

@@ -1,29 +1,26 @@
-from setuptools import setup
+#!/usr/bin/env python
+# Copyright 2017, Verizon Inc.
+# Licensed under the terms of the apache license. See the LICENSE file in the project root for terms
+"""
+Package setup file for python module 'tensorflowonspark'
+"""
+import setuptools
+import sys
 
-with open('README.md') as f:
-    long_description = f.read()
 
-setup(
-  name='tensorflowonspark',
-  packages=['tensorflowonspark'],
-  version='2.2.1',
-  description='Deep learning with TensorFlow on Apache Spark clusters',
-  long_description=long_description,
-  long_description_content_type='text/markdown',
-  author='Yahoo, Inc.',
-  url='https://github.com/yahoo/TensorFlowOnSpark',
-  keywords=['tensorflowonspark', 'tensorflow', 'spark', 'machine learning', 'yahoo'],
-  install_requires=['packaging'],
-  license='Apache 2.0',
-  classifiers=[
-    'Intended Audience :: Developers',
-    'Intended Audience :: Science/Research',
-    'License :: OSI Approved :: Apache Software License',
-    'Topic :: Software Development :: Libraries',
-    'Programming Language :: Python :: 2',
-    'Programming Language :: Python :: 2.7',
-    'Programming Language :: Python :: 3',
-    'Programming Language :: Python :: 3.5',
-    'Programming Language :: Python :: 3.6'
-  ]
-)
+def setuptools_version_supported():
+    major, minor, patch = setuptools.__version__.split('.')
+    if int(major) > 38:
+        return True
+    return False
+
+
+if __name__ == '__main__':
+    # Check for a working version of setuptools here because earlier versions did not
+    # support python_requires.
+    if not setuptools_version_supported():
+        print('Setuptools version 38.0.0 or higher is needed to install this package')
+        sys.exit(1)
+
+    # We're being run from the command line so call setup with our arguments
+    setuptools.setup()
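
One caveat in the new `setup.py`: `setuptools.__version__.split('.')` unpacks into exactly three names, which raises `ValueError` for two-component releases such as `58.0`. A more tolerant check, sketched here as a hypothetical alternative rather than the repo's code, compares numeric prefixes of arbitrary length:

```python
def version_tuple(version: str) -> tuple:
    """Convert the leading numeric dot-components of a version string
    to a tuple of ints, stopping at the first non-numeric part
    (e.g. the 'rc1' in '3.7.4rc1')."""
    parts = []
    for part in version.split('.'):
        if not part.isdigit():
            break
        parts.append(int(part))
    return tuple(parts)


# Tuples compare element-wise, so 2- and 3-part versions mix safely:
assert version_tuple("58.0") > (38,)
assert version_tuple("40.8.0") >= version_tuple("38.0")
```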

tensorflowonspark/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -2,4 +2,4 @@
 
 logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s (%(threadName)s-%(process)d) %(message)s")
 
-__version__ = "2.2.1"
+__version__ = "2.2.2"
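
Notice the release bump touches the version string in three places (`setup.cfg`, `docs/source/conf.py`, and `tensorflowonspark/__init__.py`). A common way to drop at least one copy, sketched here as a hypothetical alternative rather than what this project does, is to resolve the installed version from package metadata at runtime (stdlib `importlib.metadata` requires Python 3.8+; the `importlib_metadata` backport would be needed for the py37 environment used in the pipeline):

```python
from importlib.metadata import version, PackageNotFoundError  # Python 3.8+


def installed_version(package: str = "tensorflowonspark") -> str:
    """Look up the version from installed package metadata,
    falling back when running from an uninstalled checkout."""
    try:
        return version(package)
    except PackageNotFoundError:
        return "0.0.0.dev0"
```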
9 files renamed without changes.
