
Malware API call sequence embeddings

#1
by Akimbofmg9 - opened

Hello,

I have a list of API call sequences from emulated malware logs. I want to classify the call sequences together with their arguments, so I need embeddings. I chose this model because it can code.

I plan on using these embeddings in a graph transformer, but my question is: how do I get the model to generate embeddings for me?

I'm using the feature-extraction pipeline, but the issue is the context length: I have sequences of more than 3,000 API calls. How would I make it work?
I'm pretty new to this, so please dumb it down a little.
Any clues?
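Not an official answer, but a common workaround for the context-length limit is to split the token sequence into overlapping windows, embed each window separately, and pool the per-window vectors into one embedding. Below is a minimal sketch of the windowing step; the commented pipeline usage, the model name, and the window/stride sizes are illustrative assumptions, not something from this thread:

```python
def chunk_windows(ids, max_len=512, stride=128):
    """Split a token-id list into overlapping windows.

    max_len: window size in tokens.
    stride:  overlap between consecutive windows, so an API call
             sitting on a chunk boundary still appears whole in
             at least one window.
    """
    step = max_len - stride
    windows = []
    for start in range(0, len(ids), step):
        windows.append(ids[start:start + max_len])
        if start + max_len >= len(ids):
            break  # last window already covers the tail
    return windows


# In real use (hypothetical model name), each window would go through
# the feature-extraction pipeline, and the per-window vectors would be
# mean-pooled into a single fixed-size embedding, e.g.:
#
#   from transformers import pipeline
#   fe = pipeline("feature-extraction", model="some-encoder-model")
#   vecs = [fe(tokenizer.decode(w))[0] for w in chunk_windows(ids)]
#   # ...then average vecs along the window axis.
```

For very long malware traces you may prefer embedding each API call (name plus arguments) individually and letting the graph transformer handle the sequence structure, rather than forcing the whole trace through one context window.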

I'm trying to run the Mixtral models, like the Dolphin one here, with the provided code, but it seems it is not yet supported in Transformers. I upgraded to the latest version as suggested (transformers-4.37.0.dev0), but it still gives me the following error:
File /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:566,
ValueError(
570 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
571 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
572 )
...
/opt/conda/lib/python3.10/site-packages/auto_gptq/nn_modules/qlinear/qlinear_exllama.py:68, in QuantLinear.__init__(self, bits, group_size, infeatures, outfeatures, bias, trainable, **kwargs)
66 assert infeatures % 32 == 0
67 assert infeatures % self.group_size == 0
---> 68 assert outfeatures % 32 == 0
70 self.register_buffer(
71 'qweight',
72 torch.zeros((infeatures // 32 * self.bits, outfeatures), dtype=torch.int32)
73 )
74 self.register_buffer(
75 'qzeros',
76 torch.zeros((math.ceil(infeatures / self.group_size), outfeatures // 32 * self.bits), dtype=torch.int32)
77 )

AssertionError:
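The failing assertion means the exllama GPTQ kernel requires every quantized linear layer's `outfeatures` (and `infeatures`) to be a multiple of 32. With Mixtral, the likely culprit is the MoE router/gate layer, which maps the hidden size to the number of experts (8), and 8 is not divisible by 32. A small sanity check you could run over a model's layer shapes before loading; the shape values below are Mixtral-style numbers used purely for illustration:

```python
def check_gptq_compatible(shapes, group_size=128):
    """Report layers whose shapes the exllama QuantLinear kernel
    would reject, mirroring the asserts in qlinear_exllama.py.

    shapes: dict mapping layer name -> (in_features, out_features).
    """
    bad = []
    for name, (infeat, outfeat) in shapes.items():
        # Same three conditions the kernel asserts on.
        if infeat % 32 or infeat % group_size or outfeat % 32:
            bad.append(name)
    return bad


# Illustrative Mixtral-style shapes: the expert MLPs are fine, but
# the router/gate maps hidden_size (4096) -> num_experts (8), and
# 8 % 32 != 0 -- exactly the assertion in the traceback above.
shapes = {
    "experts.w1": (4096, 14336),
    "experts.w2": (14336, 4096),
    "gate": (4096, 8),
}
print(check_gptq_compatible(shapes))  # -> ['gate']
```

If that is indeed the cause, the usual fixes are to use a quantized checkpoint whose gate/router layers were left unquantized, or to disable the exllama kernel when loading (Transformers' `GPTQConfig` exposes a switch for this; check the docs for your installed version, as the parameter name has changed across releases).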
