Sparse Representation Learning / lth / Commits

Commit ca783c0e
authored Apr 25, 2022 by Benjamin Vandersmissen

Added the divide-by-sum importance measure and removed the normalize-by-std importance measure.

parent 182b01e9
Changes: 1 file — code/pruning.py
```diff
 import torch
 import copy
 import math
 import os
 import numpy as np
...
@@ -27,15 +25,8 @@ def softmax(inp):
     return torch.softmax(inp, dim=-1)

-def normalize_std(inp):
-    """
-    Divides by std of the initialization,
-    but because the std is only variable in the term math.sqrt(1/float(fan_in + fan_out)),
-    we divide by that instead
-    """
-    # TODO: this is only for GLOROT UNIFORM, implement the others when needed
-    fan_in, fan_out = torch.nn.init._calculate_fan_in_and_fan_out(inp)
-    return inp / math.sqrt(1 / float(fan_in + fan_out))
+def normalize_sum(inp):
+    return inp / inp.sum()

 def prune_by_magnitude(model: torch.nn.Module, percentage=0.2, transformation=identity):
...
```
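The new divide-by-sum measure simply rescales a tensor of importance scores so they sum to 1, turning raw magnitudes into a distribution. A minimal sketch of the same transformation, shown here on a NumPy array rather than a torch tensor (the example scores are arbitrary):

```python
import numpy as np

def normalize_sum(inp):
    # Divide every importance score by the total, so the
    # transformed scores form a distribution that sums to 1.
    return inp / inp.sum()

scores = np.array([1.0, 3.0, 4.0])
normed = normalize_sum(scores)
# normed -> array([0.125, 0.375, 0.5]); normed.sum() -> 1.0
```

Because the rescaling is by a single positive scalar (for non-negative scores), it preserves the ranking of the scores, so magnitude-based pruning picks the same weights before and after the transformation; it only changes the scale on which scores from different layers are compared.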
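For reference, the docstring of the removed `normalize_std` relies on a property of Glorot-uniform initialization: the distribution is U(-a, a) with a = sqrt(6/(fan_in + fan_out)) (gain 1), and the std of U(-a, a) is a/sqrt(3), i.e. sqrt(2) * sqrt(1/(fan_in + fan_out)). Dividing by `math.sqrt(1/float(fan_in + fan_out))` therefore equals dividing by the std up to a constant factor. A quick numeric check of that claim (the fan values below are arbitrary):

```python
import math

fan_in, fan_out = 128, 64                        # arbitrary example fan values
a = math.sqrt(6.0 / (fan_in + fan_out))          # Glorot-uniform bound
std = a / math.sqrt(3.0)                         # std of U(-a, a) is a / sqrt(3)
term = math.sqrt(1.0 / float(fan_in + fan_out))  # the term normalize_std divided by
# std differs from `term` only by the constant factor sqrt(2)
assert abs(std / term - math.sqrt(2.0)) < 1e-12
```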
...
Write
Preview
Markdown
is supported
0%
Try again
or
attach a new file
.
Attach a file
Cancel
You are about to add
0
people
to the discussion. Proceed with caution.
Finish editing this message first!
Cancel
Please
register
or
sign in
to comment