Fujitsu Laboratories Ltd.

4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan
Tel: +81(44)754-2613

About Fujitsu

Fujitsu is the leading Japanese information and communication technology (ICT) company, offering a full range of technology products, solutions and services. Approximately 126,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE:6702) reported consolidated revenues of 3.6 trillion yen (US$34 billion) for the fiscal year ended March 31, 2021. For more information, please see www.fujitsu.com.

Research collaboration with CBMM

Publications from Fujitsu's research collaboration with the Center for Brains, Minds and Machines (CBMM):

  1. Ian Mason, Anirban Sarkar, Tomotake Sasaki, and Xavier Boix. Modularity Trumps Invariance for Compositional Robustness. arXiv preprint, arXiv:2306.09005, 2023.
  2. Anirban Sarkar, Matthew Groth, Ian Mason, Tomotake Sasaki, and Xavier Boix. Deephys: Deep Electrophysiology, Debugging Neural Networks under Distribution Shifts. arXiv preprint, arXiv:2303.11912, 2023.
  3. Moyuru Yamada, Vanessa D’Amario, Kentaro Takemoto, Xavier Boix, and Tomotake Sasaki. Transformer Module Networks for systematic generalization in visual question answering. Technical Report CBMM Memo No.121, Ver.2, Center for Brains, Minds and Machines, 2023.
  4. Spandan Madan, Tomotake Sasaki, Hanspeter Pfister, Tzu-Mao Li, and Xavier Boix. Adversarial examples within the training distribution: A widespread challenge. arXiv preprint, arXiv:2106.16198v2, 2023.
  5. Shobhita Sundaram, Darius Sinha, Matthew Groth, Tomotake Sasaki, and Xavier Boix. Recurrent connections facilitate symmetry perception in deep networks. Scientific Reports, Vol.12, Article number: 20931, 2022.
  6. Akira Sakai, Taro Sunagawa, Spandan Madan, Kanata Suzuki, Takashi Katoh, Hiromichi Kobashi, Hanspeter Pfister, Pawan Sinha, Xavier Boix, and Tomotake Sasaki. Three approaches to facilitate invariant neurons and generalization to out-of-distribution orientations and illuminations. Neural Networks, Vol.155, Pages 119–143, 2022.
  7. Spandan Madan, Timothy Henry, Jamell Dozier, Helen Ho, Nishchal Bhandari, Tomotake Sasaki, Fredo Durand, Hanspeter Pfister, and Xavier Boix. When and how convolutional neural networks generalize to out-of-distribution category–viewpoint combinations. Nature Machine Intelligence, Vol.4, No.2, Pages 146–153, 2022.
  8. Vanessa D’Amario, Sanjana Srivastava, Tomotake Sasaki, and Xavier Boix. The data efficiency of deep learning is degraded by unnecessary input dimensions. Frontiers in Computational Neuroscience, Vol.16, 2022.
  9. Akira Sakai, Taro Sunagawa, Spandan Madan, Kanata Suzuki, Takashi Katoh, Hiromichi Kobashi, Hanspeter Pfister, Pawan Sinha, Xavier Boix, and Tomotake Sasaki. Three approaches to facilitate DNN generalization to objects in out-of-distribution orientations and illuminations. Technical Report CBMM Memo No.119, Center for Brains, Minds and Machines, 2022. (Previous version of 6.)
  10. Vanessa D’Amario, Tomotake Sasaki, and Xavier Boix. How modular should neural module networks be for systematic generalization? In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021.
  11. Avi Cooper, Xavier Boix, Daniel Harari, Spandan Madan, Hanspeter Pfister, Tomotake Sasaki, and Pawan Sinha. To which out-of-distribution object orientations are DNNs capable of generalizing? arXiv preprint, arXiv:2109.13445, 2021.
  12. Kimberly Villalobos, Vilim Stih, Amineh Ahmadinejad, Shobhita Sundaram, Jamell Dozier, Andrew Francl, Frederico Azevedo, Tomotake Sasaki, and Xavier Boix. Do neural networks for segmentation understand insideness? Neural Computation, Vol.33, No.9, Pages 2511–2549, 2021.
  13. Shobhita Sundaram, Darius Sinha, Matthew Groth, and Xavier Boix. Recurrent connections facilitate learning symmetry perception. ICLR 2021 Workshop “Generalization beyond the training distribution in brains and machines”, 2021.
  14. Akira Sakai, Taro Sunagawa, Spandan Madan, Kanata Suzuki, Takashi Katoh, Hiromichi Kobashi, Hanspeter Pfister, Pawan Sinha, Xavier Boix, and Tomotake Sasaki. Treating spurious correlations with ENAMOR: Enforcing nuisance attributes to be mitigated on the representation. ICLR 2021 Workshop “Generalization beyond the training distribution in brains and machines”, 2021.
  15. Stephen Casper, Xavier Boix, Vanessa D’Amario, Ling Guo, Martin Schrimpf, Kasper Vinken, and Gabriel Kreiman. Frivolous units: Wider networks are not really that wide. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), Pages 6921–6929, 2021.
  16. Spandan Madan, Timothy Henry, Jamell Dozier, Helen Ho, Nishchal Bhandari, Tomotake Sasaki, Fredo Durand, Hanspeter Pfister, and Xavier Boix. On the capability of neural networks to generalize to unseen category-pose combinations. Technical Report CBMM Memo No.111, Center for Brains, Minds and Machines, 2020. (Previous version of 7.)
  17. Kimberly Villalobos, Vilim Stih, Amineh Ahmadinejad, Shobhita Sundaram, Jamell Dozier, Andrew Francl, Frederico Azevedo, Tomotake Sasaki, and Xavier Boix. Do neural networks for segmentation understand insideness? Technical Report CBMM Memo No.105, Center for Brains, Minds and Machines, 2020. (Previous version of 12.)