rootxhacker committed
Commit a2c2bbc · verified · 1 Parent(s): 0fad99c

Training checkpoint 184000 - 1.9B tokens

Files changed (3):
  1. README.md +6 -6
  2. checkpoint_184000.pt +3 -0
  3. config.json +7 -7
README.md CHANGED
@@ -18,11 +18,11 @@ pipeline_tag: text-generation
 
 ## Current Training Status
 
-- **Training Step**: 182,000
-- **Tokens Processed**: 1.86B tokens
-- **Current Loss**: 4.7050
+- **Training Step**: 184,000
+- **Tokens Processed**: 1.88B tokens
+- **Current Loss**: 4.5155
 - **Spike Rate**: 0.0504
-- **Learning Rate**: 1.94e-04
+- **Learning Rate**: 9.64e-06
 
 ## Model Architecture
 
@@ -54,7 +54,7 @@ This represents ongoing training of the first large-scale spiking neural network
 from huggingface_hub import hf_hub_download
 checkpoint = hf_hub_download(
     repo_id="rootxhacker/piking-llm-5b-3epochs-exp",
-    filename="checkpoint_182000.pt"
+    filename="checkpoint_184000.pt"
 )
 
 # Load with custom spiking model code
@@ -65,4 +65,4 @@ checkpoint = hf_hub_download(
 
 **🔬 This is live research in progress! Check back for updates as training continues.**
 
-**Training Progress**: 12.4% complete towards 15B tokens
+**Training Progress**: 12.6% complete towards 15B tokens
checkpoint_184000.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ed0691238b5c0f1be6bbb1d23639fb81f26d5d6a361718a35f1541ab67a61d2e
+size 999026730
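The three added lines are a Git LFS pointer file, not the checkpoint weights themselves; the actual ~999 MB `.pt` file lives in LFS storage and is fetched on download. A minimal sketch of reading such a pointer (the `parse_lfs_pointer` helper is hypothetical, but the `version`/`oid`/`size` field layout is the standard git-lfs pointer format):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a key/value dict.

    Each line is "<key> <value>"; values may contain spaces, so split once.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer content exactly as added in this commit.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:ed0691238b5c0f1be6bbb1d23639fb81f26d5d6a361718a35f1541ab67a61d2e\n"
    "size 999026730\n"
)
info = parse_lfs_pointer(pointer)
print(int(info["size"]))  # 999026730 bytes, ~0.93 GiB
```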
config.json CHANGED
@@ -4,11 +4,11 @@
   "hidden_size": 768,
   "num_layers": 12,
   "max_seq_length": 1024,
-  "training_step": 182000,
-  "tokens_processed": 1863680000,
-  "loss": 4.704965196601672,
-  "spike_rate": 0.0504144437587055,
-  "learning_rate": 0.00019447220794276915,
-  "epoch": 0.372736,
-  "progress_percent": 12.424565140220093
+  "training_step": 184000,
+  "tokens_processed": 1884160000,
+  "loss": 4.5154644428033786,
+  "spike_rate": 0.050425242981711645,
+  "learning_rate": 9.644922584877327e-06,
+  "epoch": 0.376832,
+  "progress_percent": 12.561066666666667
 }
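The updated config values are internally consistent. A quick sanity check, assuming a per-step token count of 10,240 and a 5B-token epoch (both inferred from the ratios of the committed numbers, not stated anywhere in the commit):

```python
# Sanity-check the config.json fields against each other.
# Assumed constants, inferred from the committed values:
TOKENS_PER_STEP = 10_240           # 1,884,160,000 tokens / 184,000 steps
EPOCH_TOKENS = 5_000_000_000       # implied by epoch = tokens / 5e9
TARGET_TOKENS = 15_000_000_000     # "15B tokens" target from the README

step = 184_000
tokens = step * TOKENS_PER_STEP

print(tokens)                        # matches "tokens_processed": 1884160000
print(tokens / EPOCH_TOKENS)         # matches "epoch": 0.376832
print(100 * tokens / TARGET_TOKENS)  # matches "progress_percent": ~12.5611
```

The same arithmetic explains the README rounding: 1,884,160,000 tokens displays as "1.88B" and the commit message's "1.9B", and 5B tokens/epoch over 3 epochs gives the 15B-token target.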