The way the human brain thinks is part of what makes us unique. A deep understanding of the human brain and how it forms thoughts can serve as a model for AI enthusiasts who want to build systems that imitate human thinking. In theory, it is possible to create an AI-based morphing neural network that operates as a virtual quantum computer. That means next-generation cloud architectures could turn morphing neural networks into a new type of cloud-based solution.
Morphing neural-network clouds can greatly advance data security, and they can also break it. A morphing neural network that shares information between multiple actors can look like a regular cloud-based system. Such a system can compare the information that users and other sensors feed into it. The neural network can notice when the same way of writing usernames and passwords keeps repeating across cases where access is denied, or when a certain password is tried in multiple places within a short period.
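The core of that pattern check does not need anything exotic. A minimal sketch, assuming a hypothetical log format of failed-login records `(timestamp, username, host, password_hash)` and invented window/threshold values, could flag the same attempted credential appearing against many accounts in a short period:

```python
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=10)   # assumed detection window
THRESHOLD = 5                    # assumed: same password tried on this many accounts

def find_spray_patterns(events):
    """Group denied logins by the hash of the attempted password and
    flag hashes that hit many different accounts within WINDOW."""
    by_hash = defaultdict(list)
    for ts, user, host, pw_hash in events:
        by_hash[pw_hash].append((ts, user, host))

    alerts = []
    for pw_hash, hits in by_hash.items():
        hits.sort()  # chronological order
        start = 0
        for end in range(len(hits)):
            # shrink the window from the left until it spans <= WINDOW
            while hits[end][0] - hits[start][0] > WINDOW:
                start += 1
            users = {u for _, u, _ in hits[start:end + 1]}
            if len(users) >= THRESHOLD:
                alerts.append(pw_hash)
                break
    return alerts
```

This is the classic "password spraying" signature: one password, many accounts, short time span. A real deployment would feed this from the authentication log rather than an in-memory list.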
It can also notice when somebody tests entry to the system using an old password, for example one that was once issued to a consultant. The problem with conventional firewalls and other traditional data security tools is that they don't report such attempts. The system should instead tell users or supervisors that someone is trying old, deactivated passwords whose owners no longer work there. In that case, an ex-worker might have sold access to the system.
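Flagging attempts against deactivated accounts is a small wrapper around the normal login check. A minimal sketch, assuming a hypothetical `deactivated` table of former users and an alert callback (both invented for illustration):

```python
# Hypothetical record of deactivated accounts, e.g. ex-consultants,
# mapping username -> deactivation date.
deactivated = {"jsmith": "2024-03-01", "consult1": "2023-11-15"}

def check_login_attempt(username, password_ok, log_alert):
    """Deny and loudly report any attempt against a deactivated account,
    whether or not the old password would still have matched."""
    if username in deactivated:
        log_alert(f"Login attempt on deactivated account '{username}' "
                  f"(deactivated {deactivated[username]})")
        return False  # always deny, but make noise for the security team
    return password_ok
```

The point is that the attempt itself is the signal: a conventional system silently rejects the old password, while this version tells a supervisor that somebody still holds credentials that should be dead.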
The morphing neural network can run a complicated AI-based interactive system. Data security specialists can have ChatGPT-style tools to help secure their networks. The problem is that, quite soon, attackers may also have access to ChatGPT-style AI tools that can generate virus and malware code. Attackers who combine that kind of toolkit with advanced AI systems and morphing neural networks can make systematic attacks from different computers all the time. That makes it hard to stop an attack simply by blocking certain IP addresses.
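One defensive answer to attackers who rotate addresses is to count failures per *account* instead of per source IP. A minimal sketch, with invented window and limit values:

```python
from collections import defaultdict, deque
import time

WINDOW_S = 300   # assumed: 5-minute sliding window
MAX_FAILS = 10   # assumed: failures tolerated per account in that window

class AccountThrottle:
    """Track failed logins per account rather than per source IP,
    so an attacker rotating IP addresses still trips the limit."""

    def __init__(self):
        self.fails = defaultdict(deque)

    def record_failure(self, account, now=None):
        now = time.time() if now is None else now
        q = self.fails[account]
        q.append(now)
        # drop failures that fell out of the sliding window
        while q and now - q[0] > WINDOW_S:
            q.popleft()
        return len(q) > MAX_FAILS  # True -> lock or alert, whatever the IPs
```

The design choice here is simply to key the rate limit on the thing the attacker cannot rotate (the target account) rather than the thing they can (the source address).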
The AI can search data from multiple sources. It can also mount attacks by testing the public data that it collects from the net. The operator only needs to order the system to attack the targeted systems, for example by filling in a form with the company name, a contact name, and an IP address. Usernames are normally easy to guess; the attackers' only real problem is confirming that a given person is a worker who has access to the system.
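Why are usernames "easy to guess"? Because most organizations use a handful of predictable formats. A minimal sketch (the formats listed are common conventions, not any specific company's policy):

```python
def candidate_usernames(first, last):
    """Generate common corporate username formats for a known employee
    name; knowing the name alone narrows the search enormously."""
    f, l = first.lower(), last.lower()
    return [
        f + "." + l,   # jane.doe
        f[0] + l,      # jdoe
        f + l[0],      # janed
        l + f[0],      # doej
        f,             # jane
        l,             # doe
    ]
```

Combined with a company's email domain scraped from its website, a list like this is usually enough to enumerate valid accounts.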
The AI can read the license plates of cars in the company's parking lot. Then, if it has access to the right systems, the AI can search for those cars in images and trace them to an address. The attacker can then go and read the name from the mailbox, or just google that address. If the targeted person runs their own company at that address, the attacker can easily get their name.
Then the AI can search for the targeted company's workers and their personal data, such as dog names, dog breeds, and other easy variables. If a person's hobbies include things like Marvel comics, the AI can use those names and common variants of those words as password guesses. The AI can also tell when there are anomalies in access attempts: if the curve of unsuccessful login attempts rises, the data security team should be told.
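The "common variants" step is mechanical: take the personal seed words and expand them the way guessing tools do. A minimal sketch, with an invented (and deliberately tiny) set of mutations:

```python
import itertools

LEET = str.maketrans("aeio", "4310")          # a->4, e->3, i->1, o->0
SUFFIXES = ["", "1", "123", "2024", "!"]      # assumed common tack-ons

def mutate(seed_words):
    """Expand personal seed words (pet names, fandom terms) into the
    common variants a password-guessing tool would try first."""
    guesses = set()
    for w in seed_words:
        bases = {w.lower(), w.capitalize(), w.lower().translate(LEET)}
        for base, suffix in itertools.product(bases, SUFFIXES):
            guesses.add(base + suffix)
    return sorted(guesses)
```

A dog name like "Rex" alone yields guesses such as `rex123`, `Rex!`, and `r3x2024`; scale the seed list up with scraped social-media data and the search space a defender must worry about grows fast.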
In next-generation data security, the system should tell its operators whenever there are anomalies anywhere in the system. Users must also dare to report when somebody asks questions about their workplace. And things like web cameras, and the images they send, must be checked so that weapons and other interesting details are not visible.
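The "rise in the curve" of failed logins mentioned above can be caught with even a crude baseline check. A minimal sketch, assuming hourly failure counts and invented threshold parameters:

```python
def failed_login_spike(hourly_counts, factor=3.0, min_baseline=5):
    """Flag the latest hour if failed logins exceed `factor` times the
    average of the preceding hours (a crude rise-in-the-curve check).

    `factor` and `min_baseline` are illustrative tuning knobs; the
    minimum baseline stops tiny counts from triggering false alarms.
    """
    *history, latest = hourly_counts
    baseline = max(sum(history) / len(history), min_baseline)
    return latest > factor * baseline
```

A production system would use a proper time-series anomaly model, but even this two-liner turns "somebody should notice the curve rising" into an automatic alert.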