Robert Vanwey is a distinguished former senior technical analyst and investigator in law enforcement, renowned for his expertise at the New York State Division of Criminal Justice. With a comprehensive grasp of video and audio evidence examination and adeptness in open-source intelligence investigations, he has contributed significantly to the field.
Robert Vanwey is a certified criminal analyst and holds numerous other certifications, notably in popular forensic tools.
Having recently transitioned to academia, Robert Vanwey teaches Ethical Hacking and Cybersecurity at Softwarica College, following a successful tenure in law enforcement where he specialised in training across a diverse array of skills, particularly ethical hacking and digital forensics.
Onlinekhabar caught up with him to talk about ethical hacking, its scope and career prospects, and to hear his insights on the new forms of threats that technology has created.
Excerpts:
As an academician, what do you think is the status of education in Nepal in terms of ethical hacking and its scope?
It is evolving, despite the challenges posed by old perceptions and misunderstanding. Ethical hacking still faces misconceptions because media portrayals, in movies and TV, often associate hacking solely with malicious intent. So the initial challenge, right from the start of the programme, is still elucidating the essence of ethical hacking and emphasising its crucial role in proactively identifying vulnerabilities to fortify systems against potential threats.
The academic landscape sees a diverse mix of students attracted to the multidisciplinary nature of ethical hacking. It demands a varied skill set encompassing networking, systems, operating systems, and problem-solving abilities. However, this diversity also poses a challenge, requiring simultaneous mastery of a wide array of skills. Academic institutions aim to give students clarity about the programme's requirements and empower them to transition to other fields if needed, leveraging the comprehensive skill set acquired.
The core purpose of ethical hacking lies in fortifying systems, not only against malicious attacks but also to ensure operational continuity amid unforeseen disruptions like power outages or network failures, especially in critical institutions like hospitals or businesses.
Ultimately, ethical hacking education in Nepal presents a pathway rich in versatility, offering graduates a broad skill spectrum adaptable to diverse career trajectories beyond cybersecurity. Communicating the societal significance of ethical hacking remains pivotal. Students have exhibited growing interest, yet parental reservations, stemming from unfamiliarity, persist.
Is digital forensics different from ethical hacking? Do people understand what digital forensics is, or is the understanding still very limited?
Digital forensics differs from ethical hacking, but it suffers from limited understanding in the public domain. Unlike hacking, digital forensics gets little visibility in popular discussion; portrayals show only the beginning and end stages of an investigation without detailing the process. Yet it still demands a broad knowledge base for successful execution.
In essence, digital forensics involves extracting information from devices or systems to uncover answers, aiding in criminal investigations, analysing data breaches, or troubleshooting complex network issues. However, comprehending digital forensics necessitates familiarity with operating systems, security protocols, and networks.
Students typically have less awareness of digital forensics as a field, yet it offers a relatively straightforward starting point, especially with foundational knowledge of operating systems, networks, and devices. Educational institutions, like Softwarica, provide preliminary courses and workshops to introduce aspiring students to the requirements and fundamentals of digital forensics, enabling an initial understanding of this field’s demands and potential.
Given the technological transformations the world has seen, do you think Nepal is producing enough skilled human resources (IT professionals, ethical hackers, digital forensic experts)?
Nepal has demonstrated the capacity to produce skilled resources, especially in ethical hacking. Students from Nepal, particularly from institutions like Softwarica, have excelled globally, showcasing high levels of competence. When comparing Nepali students to their international counterparts from Coventry University in the UK, there are instances where Nepali students outperformed them. This highlights the calibre of Nepali students in ethical hacking, while others have achieved success in national and international competitions and secured prestigious job placements.
In ethical hacking specifically, Nepal has proven its ability to nurture and produce exceptionally talented individuals. Students from institutions like Softwarica exhibit competitiveness on a global scale, reflecting Nepal’s potential to contribute skilled IT professionals and ethical hackers to the world.
Even before Nepal started academic learning in the sector, there were a few self-taught individuals. What are the pros and cons of academic learning in this sector?
Self-taught individuals in technology can possess unique strengths, displaying exceptional skills in specific areas. However, academic learning in this sector offers structured progression and a comprehensive understanding of various aspects, enhancing efficiency and knowledge depth.
In academic settings, students benefit from the guidance of experienced instructors, collaborative learning environments, and a well-rounded education that facilitates a broader understanding of interdisciplinary concepts, preparing individuals to adapt to diverse challenges within the industry. The academic approach fosters a community where individuals learn from peers and mentors, gaining insights that self-teaching might not provide. They also get the chance to build a community and professional network for the future.
Those who have studied abroad, trained themselves, gained experience and then come back to work here have done exceptionally well, as they bring wider perspectives and understanding.
Nepal is seeing a lot of cybercrimes lately. Given the changing digital landscape and the nature of cybercrimes (cyberbullying, gender-based violence and abuse, blackmailing, extortion), what would you recommend Nepal focuses on for the mitigation?
Nepal is facing the same issues that many places either face now or have faced before and gradually found their way out of. Addressing cyberbullying remains a challenge globally, as it is deeply intertwined with societal norms and behaviour.
While no definitive solution exists, promoting empathy, respect, and responsible online conduct can help mitigate these issues through multifaceted approaches. Ultimately, awareness, education, and a responsible digital culture are pivotal in combating cybercrimes in Nepal. Nepal's advantage lies in its smaller size, which allows a quicker societal learning curve compared to larger nations.
Educating individuals about cyber risks, especially in schools through comprehensive software education, can empower students to pass this knowledge on to their families. Fostering a culture of responsible digital behaviour is crucial. Encouraging individuals, especially young people, to exercise caution before sharing personal information or images online is vital; the guiding notion is that anything shared digitally should be content one would be comfortable showing to a stranger.
But in today's age, when deepfake videos are surfacing more and more and creating so many issues, do you think it is enough? How do you see the problem of deepfake and AI videos?
Deepfake videos pose a significant challenge, primarily at larger scales rather than the individual level. Presently, creating high-quality deepfakes remains complex, so they tend to target celebrities or manipulate notable events rather than ordinary individuals. The critical question is whether it will become easier to create deepfakes or to detect and prevent them.
At the current pace, deepfake creation might outpace detection capabilities, raising concerns about the authenticity of content and potential misuse for misinformation. Efforts are underway to develop detection mechanisms, yet the challenge persists. Government intervention may be crucial in establishing regulations that mandate the identification or declaration of AI-generated content. However, as of now, no formal regulations exist globally for this purpose, though discussions and considerations within governmental spheres are likely taking place.
The future may necessitate formal regulations to address the growing threat of deepfakes, ensuring accountability and transparency in digital content to counter potential misuse and the spread of misinformation.
What kind of discussions are the tech giants having across the world? Or intergovernmental discussions?
Tech giants like Google and Facebook are engaging in internal discussions on the ethical use of AI systems. Several high-profile instances of ethicists being dismissed from companies like Google highlight the internal debates surrounding AI ethics. Despite these discussions, concerns about the ethical implications of releasing iterations like ChatGPT with Vision have been flagged within the companies' own white papers, particularly regarding potential unreliability in critical situations. These warnings, however, have not deterred the release of these technologies to the public.
As for intergovernmental discussions, it’s unclear how extensively these conversations are being carried out and what actions are being taken. Governments typically move slower than tech companies in responding to emerging technological challenges. Consequently, when tech companies release technology without full resolution of ethical concerns, governments are forced to react rather than proactively regulate.
Nepal recently banned TikTok. Do you think that can control misinformation, fake videos and data privacy, as intended?
Banning platforms like TikTok or other individual social media platforms may not effectively address the issues of misinformation, fake videos, or data privacy breaches. When one platform gets banned, users often migrate to other existing platforms, circumventing the ban and continuing similar activities. Meanwhile, a comprehensive ban on all social media platforms seems impractical and ineffective.
Regulation and stringent enforcement could be more impactful than outright bans. Imposing severe penalties and fines on platforms for violating data privacy or propagating misinformation could act as a deterrent. However, it is crucial to ensure that the penalties are substantial enough to influence these massive corporations to alter their practices significantly. So far, such penalties have been just a dent in their massive turnovers and valuations.
Governments need to focus on robust regulation coupled with substantial penalties proportional to the scale and impact of violations. While bans might restrict specific platforms, they don’t address the underlying issues and often lead to users seeking alternative outlets. Effective governance through stringent yet proportionate regulations and penalties stands as a more viable approach to tackling issues within social media platforms.
Given your expertise, could you give a few suggestions to Nepal, in terms of what kind of issues to focus on and what kind of infrastructures or experts we need to produce?
Nepal, like many other countries, should prioritise the establishment of robust data protection and privacy regulations. Given the extensive data collection by various platforms, stringent legal frameworks are essential to safeguard citizens’ privacy.
Emulating the European Union's approach to data protection could serve as a model. Such regulations are crucial: they not only protect personal data but also mitigate the cyberattacks and breaches at tech giants that can affect countless individuals due to unregulated data-sharing practices.
Furthermore, all governments globally, including Nepal, should focus on addressing the challenges surrounding AI regulation and accountability. Establishing frameworks for responsible AI use is imperative to prevent potential dangers, such as undetectable deepfakes, which could pose severe threats. Intergovernmental discussions about AI are ongoing, but concrete steps toward effective regulation and accountability measures are yet to be fully realised.
A critical aspect of this process is ensuring that regulations are not overly influenced by the profit motives of the companies producing AI technologies, as that could impede robust and effective regulatory frameworks. Therefore, emphasis should be placed on developing accountable and unbiased regulatory bodies to address these emerging challenges effectively.