Commit 507932c

Make Texar-PyTorch compatible with Texar-TF (#129)
* Move `texar` to `texar.torch`
  - Change all imports and module paths beginning with `texar` to `texar.torch`
  - Modify `setup.py` accordingly
  - Merge the `DataBase._construct` methods into the constructor to remove duplicated code
  - Change `TextLineDataSource` to return tokenized text to improve performance
* Fix docs & README
1 parent: d212ab0
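For downstream code, the headline change is mechanical: the package root moves from `texar` to `texar.torch`, freeing the bare `texar` namespace for Texar-TF so the two frameworks can be installed side by side. A minimal migration sketch (the `tx` alias is the convention the README below already uses):

```python
# Before this commit, Texar-PyTorch claimed the bare `texar` package:
#     import texar as tx
# After this commit, it lives under `texar.torch`:
import texar.torch as tx

# Because the conventional `tx` alias is unchanged, call sites such as
# `tx.modules.WordEmbedder(...)` or `tx.losses.*` need no further edits.
```

Note that the `TextLineDataSource` change listed above is behavioral rather than cosmetic: per the commit message, sources now return tokenized text rather than raw lines, so downstream datasets can skip a redundant tokenization pass.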

File tree: 192 files changed, +1837 -1977 lines

README.md (+10 -7)
@@ -41,14 +41,17 @@ Users can construct their own models at a high conceptual level just like assemb
 ### Library API Example
 A code portion that builds a (self-)attentional sequence encoder-decoder model:
 ```python
-import texar as tx
+import texar.torch as tx

 class Seq2Seq(tx.ModuleBase):
     def __init__(self, data):
-        self.embedder = tx.modules.WordEmbedder(data.target_vocab.size, hparams=hparams_emb)
-        self.encoder = tx.modules.TransformerEncoder(hparams=hparams_encoder) # config through `hparams`
+        self.embedder = tx.modules.WordEmbedder(
+            data.target_vocab.size, hparams=hparams_emb)
+        self.encoder = tx.modules.TransformerEncoder(
+            hparams=hparams_encoder) # config through `hparams`
         self.decoder = tx.modules.AttentionRNNDecoder(
-            input_size=self.embedder.dim
+            token_embedder=self.embedder,
+            input_size=self.embedder.dim,
             encoder_output_size=self.encoder.output_size,
             vocab_size=data.target_vocab.size,
             hparams=hparams_decoder)
@@ -59,17 +62,17 @@ class Seq2Seq(tx.ModuleBase):
             sequence_length=batch['source_length'])

         outputs, _, _ = self.decoder(
-            memory=output_enc,
+            memory=outputs_enc,
             memory_sequence_length=batch['source_length'],
             helper=self.decoder.get_helper(decoding_strategy='train_greedy'),
-            inputs=self.embedder(batch['target_text_ids']),
+            inputs=batch['target_text_ids'],
             sequence_length=batch['target_length']-1)

         # Loss for maximum likelihood learning
         loss = tx.losses.sequence_sparse_softmax_cross_entropy(
             labels=batch['target_text_ids'][:, 1:],
             logits=outputs.logits,
-            sequence_length=batch['target_length']-1) # Automatic masking
+            sequence_length=batch['target_length']-1)  # Automatic masking

         return loss
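The README class above is a fragment: `hparams_emb`, `hparams_encoder`, and `hparams_decoder` are configuration dicts it assumes but does not define, and the second hunk sits inside a method that consumes a `batch` and returns the loss. A hedged sketch of how such a module might be driven, with the data iterator and optimizer setup as illustrative placeholders rather than part of this commit:

```python
import torch

# Assumptions (hypothetical glue, not part of this commit): `Seq2Seq` is
# the README class above, `data` is a Texar data object exposing
# `target_vocab`, and `data_iterator` yields training batches; calling
# the module on a batch runs encoder + decoder and returns the MLE loss.
model = Seq2Seq(data)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for batch in data_iterator:
    optimizer.zero_grad()
    loss = model(batch)  # sequence_sparse_softmax_cross_entropy inside
    loss.backward()
    optimizer.step()
```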

docs/code/core.rst (+37 -37)
@@ -10,44 +10,44 @@ Attention Mechanism

 :hidden:`AttentionWrapperState`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.AttentionWrapperState
+.. autoclass:: texar.torch.core.AttentionWrapperState
     :members:

 :hidden:`LuongAttention`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.LuongAttention
+.. autoclass:: texar.torch.core.LuongAttention
     :members:

 :hidden:`BahdanauAttention`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.BahdanauAttention
+.. autoclass:: texar.torch.core.BahdanauAttention
     :members:

 :hidden:`BahdanauMonotonicAttention`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.BahdanauMonotonicAttention
+.. autoclass:: texar.torch.core.BahdanauMonotonicAttention
     :members:

 :hidden:`LuongMonotonicAttention`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.LuongMonotonicAttention
+.. autoclass:: texar.torch.core.LuongMonotonicAttention
     :members:

 :hidden:`compute_attention`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.compute_attention
+.. autofunction:: texar.torch.core.compute_attention

 :hidden:`monotonic_attention`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.monotonic_attention
+.. autofunction:: texar.torch.core.monotonic_attention

 :hidden:`hardmax`
 ~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.hardmax
+.. autofunction:: texar.torch.core.hardmax

 :hidden:`sparsemax`
 ~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.sparsemax
+.. autofunction:: texar.torch.core.sparsemax


@@ -56,54 +56,54 @@ Cells

 :hidden:`default_rnn_cell_hparams`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.default_rnn_cell_hparams
+.. autofunction:: texar.torch.core.default_rnn_cell_hparams

 :hidden:`get_rnn_cell`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.get_rnn_cell
+.. autofunction:: texar.torch.core.get_rnn_cell

 :hidden:`wrap_builtin_cell`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.wrap_builtin_cell
+.. autofunction:: texar.torch.core.wrap_builtin_cell

 :hidden:`RNNCellBase`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.cell_wrappers.RNNCellBase
+.. autoclass:: texar.torch.core.cell_wrappers.RNNCellBase
     :members:

 :hidden:`RNNCell`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.cell_wrappers.RNNCell
+.. autoclass:: texar.torch.core.cell_wrappers.RNNCell
     :members:

 :hidden:`GRUCell`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.cell_wrappers.GRUCell
+.. autoclass:: texar.torch.core.cell_wrappers.GRUCell
     :members:

 :hidden:`LSTMCell`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.cell_wrappers.LSTMCell
+.. autoclass:: texar.torch.core.cell_wrappers.LSTMCell
     :members:

 :hidden:`DropoutWrapper`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.cell_wrappers.DropoutWrapper
+.. autoclass:: texar.torch.core.cell_wrappers.DropoutWrapper
     :members:

 :hidden:`ResidualWrapper`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.cell_wrappers.ResidualWrapper
+.. autoclass:: texar.torch.core.cell_wrappers.ResidualWrapper
     :members:

 :hidden:`HighwayWrapper`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.cell_wrappers.HighwayWrapper
+.. autoclass:: texar.torch.core.cell_wrappers.HighwayWrapper
     :members:

 :hidden:`MultiRNNCell`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.cell_wrappers.MultiRNNCell
+.. autoclass:: texar.torch.core.cell_wrappers.MultiRNNCell
     :members:

 :hidden:`AttentionWrapper`
@@ -112,7 +112,7 @@ Cells
    Luong
    Bahdanau

-.. autoclass:: texar.core.cell_wrappers.AttentionWrapper
+.. autoclass:: texar.torch.core.cell_wrappers.AttentionWrapper
     :members:


@@ -122,75 +122,75 @@ Layers

 :hidden:`get_layer`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.get_layer
+.. autofunction:: texar.torch.core.get_layer

 :hidden:`MaxReducePool1d`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.MaxReducePool1d
+.. autoclass:: texar.torch.core.MaxReducePool1d
     :members:

 :hidden:`AvgReducePool1d`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.AvgReducePool1d
+.. autoclass:: texar.torch.core.AvgReducePool1d
     :members:

 :hidden:`get_pooling_layer_hparams`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.get_pooling_layer_hparams
+.. autofunction:: texar.torch.core.get_pooling_layer_hparams

 :hidden:`MergeLayer`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.MergeLayer
+.. autoclass:: texar.torch.core.MergeLayer
     :members:

 :hidden:`Flatten`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.Flatten
+.. autoclass:: texar.torch.core.Flatten
     :members:
     :exclude-members: forward

 :hidden:`Identity`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autoclass:: texar.core.Identity
+.. autoclass:: texar.torch.core.Identity
     :members:
     :exclude-members: forward

 :hidden:`default_regularizer_hparams`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.default_regularizer_hparams
+.. autofunction:: texar.torch.core.default_regularizer_hparams

 :hidden:`get_regularizer`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.get_regularizer
+.. autofunction:: texar.torch.core.get_regularizer

 :hidden:`get_initializer`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.get_initializer
+.. autofunction:: texar.torch.core.get_initializer

 :hidden:`get_activation_fn`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.get_activation_fn
+.. autofunction:: texar.torch.core.get_activation_fn


 Optimization
 =============

 :hidden:`default_optimization_hparams`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.default_optimization_hparams
+.. autofunction:: texar.torch.core.default_optimization_hparams

 :hidden:`get_train_op`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.get_train_op
+.. autofunction:: texar.torch.core.get_train_op

 :hidden:`get_scheduler`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.get_scheduler
+.. autofunction:: texar.torch.core.get_scheduler

 :hidden:`get_optimizer`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.get_optimizer
+.. autofunction:: texar.torch.core.get_optimizer

 :hidden:`get_grad_clip_fn`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. autofunction:: texar.core.get_grad_clip_fn
+.. autofunction:: texar.torch.core.get_grad_clip_fn
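Everything documented in this file now resolves under `texar.torch.core`; only the import path changes, not the APIs themselves. A small sketch of the corresponding update in user code (the `"LSTMCell"` default printed below is an assumption based on the documented defaults, not verified against this commit):

```python
import texar.torch as tx

# Formerly texar.core.default_rnn_cell_hparams(); same function, new path.
hparams = tx.core.default_rnn_cell_hparams()
print(hparams["type"])  # assumed default cell type, e.g. "LSTMCell"
```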
